Computing desk
Welcome to the Wikipedia Computing Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


January 23

NFC and RFID

As I understand it, near field communication is a relatively recent technology that allows devices such as mobile phones to function as both RFID readers and RFID transmitters. I'm a little unclear as to what this means in practice, though. For example, does that mean that an NFC-equipped phone could query a typical RFID smart card or keyless fob, record the signal it generates, and then duplicate it on demand for the associated keyless entry system? At present RFID keys tend to be relatively hard to duplicate (not that many people have the technology and know-how), but it would seem that NFC will make duplicating RFID keys almost trivial. (So easy, in fact, that people could do it without the key's owner ever losing control of the key.) Is that a correct understanding of the technology? Dragons flight (talk) 00:53, 23 January 2011 (UTC)

Yes, all of the assumptions you made are in fact true in principle. I don't know how this will play out in practice. Wallets with wire-mesh pockets that shield radio waves have been marketed (mostly online) for a few years now for this exact reason: to protect the RFID in bank cards, key fobs and so on from being read (and hence duplicated) without the knowledge of the owner. Roberto75780 (talk) 04:55, 23 January 2011 (UTC)

Presenting numbers in descending order in VB list box

My task is to make a form demonstrating various loops (For...Next, Do While, Do Until, etc.) by having two inputted numbers (a lower bound and an upper bound), so that even numbers go into an even list box and odd numbers go into an odd list box. As well, if the lower-bound number is higher than the upper-bound number, the list boxes are to present the numbers in descending order. I have had success with the For...Next loop (as I used the "Step" command) but I am having difficulty with the Do While loop. I cannot seem to make the numbers present themselves in descending order. What should I be trying to employ here? 24.89.210.71 (talk) 01:20, 23 January 2011 (UTC)

To present numbers in descending order you have to start at the highest number and subtract from that each time round the loop. To get a loop working correctly so it only outputs the numbers it should, you need to carefully consider how big a step is needed on each iteration and how each type of loop decides whether or not to go round again. Your teacher should have described how each loop works and particularly should have mentioned when each loop does its test on whether or not to continue. See For loop, Do while loop and While loop for more info. Astronaut (talk) 01:39, 23 January 2011 (UTC)
Do/While and For/Next loops have different properties. Just for example, here are two such loops in VB which do the same thing:
i = 5
Do While i > 0
    Print i
    i = i - 1
Loop

For i = 5 To 1 Step -1
    Print i
Next i
Both of these will have the same output. In the For/Next loop, I change how the value iterates using Step, and set my bounds in the For statement itself. For the Do/While loop, I set the starting bound of the variable explicitly in the code, then I set an end condition, and then the iteration is done in the code itself (the i = i-1 part). Make sense? --Mr.98 (talk) 02:21, 23 January 2011 (UTC)

I appreciate that you have taken the time to reply and offer these suggestions. My code for my For...Next loop resembles yours and I am able to use "Step" as you have indicated. In the For...Next loop, I can just alter the "Step" direction (positive or negative) as required. My problem lies with the Do/While button, as I cannot seem to figure out how to incorporate direction. My code again resembles yours, but since the numbers are going to a list box, they are defaulting to display in ascending order only, and I need them, on occasion, to be displayed in descending order, which seems to be eluding me. 24.89.210.71 (talk) 03:00, 23 January 2011 (UTC)

Is it possible the list box has its Sorted property set to True? Astronaut (talk) 04:00, 23 January 2011 (UTC)
An easy way to test whether it is a problem with the list box or with the code is to have the loop output its numbers somewhere else — e.g., append the numbers to a string and then use MsgBox to show the string. That will show whether your problem is with the loop or the box. Otherwise, without seeing your code, there's not really any way for us to tell what is wrong. --Mr.98 (talk) 17:16, 23 January 2011 (UTC)
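For illustration, here is a minimal sketch of such a loop in VB6-style syntax; the control names (txtLower, txtUpper, lstEven, lstOdd) are placeholders, so substitute whatever your form actually uses. The trick is to choose the step's sign once, before the loop, and fold it into the end test:
Dim lower As Integer, upper As Integer
Dim i As Integer, stepSize As Integer

lower = CInt(txtLower.Text)   ' hypothetical input text boxes
upper = CInt(txtUpper.Text)

' Count downwards when the first number is the larger one
If lower > upper Then stepSize = -1 Else stepSize = 1

i = lower
Do While i <> upper + stepSize   ' stop one step past the end bound
    If i Mod 2 = 0 Then
        lstEven.AddItem CStr(i)
    Else
        lstOdd.AddItem CStr(i)
    End If
    i = i + stepSize
Loop
If the items still come out ascending after this, check the list boxes' Sorted property as suggested above, since a sorted list box reorders whatever the loop adds.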

Future 128-Core Processor

Hello,

  I'm wondering whether in the near (or not so near) future, it may be possible to make a 128-core processor. I mean a processor with 128 CPU cores, not GPU cores or just ALUs. I envision 128 cores on a single die, arranged in a 12x12 square, with a 4x4 vacancy in the middle. Could this be possible, and any thoughts on how many years it might be before such processors come along?

This is really more of a theoretical discussion than a question.

  Thanks to everyone. Rocketshiporion 02:48, 23 January 2011 (UTC)

Intel Labs designed a 128-core chip in 2007: http://gizmodo.com/239761/intel-develops-128+core-terascale-superchip . But putting lots of cores on a die doesn't itself do you much good. The problem with all multi-core operation is contention for inter-processor (and processor-memory) connections. It's initially tempting to build a connection from each processor to all the others (that's a complete graph), but with 128 nodes that would need over 8,000 full-bandwidth interconnects (and the entire die surface would be nothing but interconnects). So designers use a crossbar switch (or the like) to manage the interconnections, but this means that there is contention for the switch, and the more cores you have the more contention. Depending on the task, you can easily reach the point where almost all of each core's time is spent waiting on the crossbar (and so there's no point in having so many cores). An attempt to mitigate this is non-uniform memory: each core has some memory of its own, and it only goes to the crossbar to access global memory or to talk to other cores. The IBM Cell processor (which drives the PS3) has per-core local memory, as (across different dies) does IBM's NUMA-Q architecture. But this leads to the second problem: writing programs for these things. Writing programs that efficiently make use of limited local memory, and that access non-local memory and other CPUs over the crossbar efficiently, is, in general, beyond the current state of human programmers, compilers, and run-time systems. A few tasks that are intrinsically and obviously parallelisable (like media coding and image processing) can be automatically divided up. But for general computing (and that's what you use a CPU for) it's not really possible (and to the extent that it is, not really worthwhile). There's some hope in implicitly parallelisable programming models (like Parallel Haskell), but even then filling 128 cores properly is a tall order. This is the real worry for those who hope CPU performance will continue to increase - clock speeds really haven't advanced much in several years, with effort instead going into multiple cores. But as you add cores you lose efficiency in the organisation of the distributed task, until you reach a point where adding more cores does no good at all. For most tasks, that's way before you get to 128. 87.113.206.61 (talk) 03:15, 23 January 2011 (UTC)
If you're thinking "well, servers already have lots of cores, albeit in different CPUs", you'd be right (Sun, HP and IBM machines scale up to 512 cores or so). But servers run hundreds of concurrent, unrelated operations (like webserver queries) which don't contend (much) with each other. When they do (when one thread in a database server has to write to an index that 100 other threads are using), performance crashes. A lot of the work in developing an industrial-scale database server is minimising the circumstances where, and consequences of, such contention. Similarly, supercomputers and distributed compute clusters (like Beowulf) have thousands of cores, and potentially suffer from all the same problems above. The stuff people typically run on supercomputers (things like finite element analysis) is highly parallelisable; it's not the general problems most people want their computer to solve. 87.113.206.61 (talk) 03:29, 23 January 2011 (UTC)
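To put a rough number on those diminishing returns (a standard back-of-the-envelope bound, not specific to any chip mentioned above), Amdahl's law says that if a fraction p of a task can be parallelised, the best possible speedup on n cores is
S(n) = 1 / ((1 - p) + p/n)
Even for a heavily parallel task with p = 0.95, 128 cores give S(128) = 1 / (0.05 + 0.95/128) ≈ 17.4, and no number of cores can ever do better than 1/(1 - p) = 20. Contention for a shared crossbar or memory only pushes the real figure lower.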
As mentioned before, there are already research processors with 128 cores. For a commercial example, I don't think there are any processors with 128 cores, but Tilera already has the 100-core TILE-Gx100. The Gx100's cores are arranged in a 10 by 10 array, and a mesh network connects them together. But unlike the Teraflops Research Chip, the Gx100 has no floating-point units; the cores implement a 64-bit, 3-way VLIW, integer-only architecture. If I am not mistaken, Tilera is to introduce its next-generation TILE processors this year, and these will have some 120 to 130 cores. If you want to look at more specialized processors, I believe there are embedded processors with hundreds of 8- or 16-bit cores for digital signal processing, but these processors have cores that are very limited. Rilak (talk) 03:50, 23 January 2011 (UTC)
The Tilera processors' cores don't appear to be full-function CPUs. I was getting at a 128-core processor where each core has at least as much functionality as a Nehalem or Westmere core, and where the 128-core processor is a computer's primary processor. The Tilera processors seem to be intended either for use in computing appliances or for co-processing. Rocketshiporion 10:54, 23 January 2011 (UTC)
It may depend on whether you want to keep the i386 architecture or not. It's certainly possible to make a CPU core with few transistors: they did it back in the old days. In my (very limited) understanding, there are a few reasons why we have so many millions of transistors in modern processors:
  • parallel execution: the main way that processors have been getting faster is by executing multiple instructions at once in a pipeline. The trouble with this approach is that the programmer (well, the compiler) needs to get the same results as if the instructions were executed in order. The fact that this process is running out of steam is the reason we're adding cores in the first place. If you have 128 cores and you've solved the cache coherency problem, the shared memory bottleneck, and figured out how to write parallel programs, you don't need this anymore.
  • architecture cruft: the i386 architecture is backwards-compatible to before many programmers were born. Furthermore, it's a CISC processor; a RISC processor would be simpler. I've heard it said that a modern Intel CPU is basically a RISC processor underneath a translation layer. I don't know how true this is.
  • fancy instructions: the i386 also has a lot of fancy features for doing certain specific operations fast. Does a processor without the XOP instruction set or 3DNow! count as "full-function"?
    • including floating-point math: division is hard. If you don't need it, or if only some of your cores need it (while you can still make use of all 128 cores), you can save a lot of space.
So, it depends. Paul (Stansifer) 19:25, 23 January 2011 (UTC)
Here's a picture of the Pentium III die layout (warning: 6MB TIFF file). I found it on this page, which has a lot of other interesting technical information about Intel CPUs. On-die cache is the biggest thing you forgot about. In addition to the data cache and L2 cache, I assume the region marked "instruction fetch" is mostly cache. Out-of-order execution (which isn't the same thing as pipelining) also takes a lot of die area. The execution units that do all the number crunching are a surprisingly small part of the total area. It's more or less true that x86 CPUs (Intel's and AMD's) decode the CISC instructions to RISC-like instructions, which are called micro-ops.
"there are a few reasons why we have so many millions of transistors in modern processors" - actually, the main reason is that we can, and that it's cheap. We have been beyond "best performance per transistor" for quite a while now. However, adding more transistors is very cheap, and so designers are willing to chase even small improvements. Paul's list contains some examples of the things people try. They are all good points, but both backwards compatibility and fancy instructions do not play a large role anymore - instruction decoding is only a small part of the logic nowadays, and backwards compatibility costs at most embedding an ancient chip somewhere in your modern core. By sheer number of transistors, cache is the biggest contributor, because it has a high impact and is easy to design. --Stephan Schulz (talk) 12:53, 24 January 2011 (UTC)
By full-function, I meant a CPU core in which the full x86-64 instruction set is implemented. That includes both integer and floating-point operations.
@87.113.206.61 - A 128-core processor would need 16,256 full-bandwidth interconnects, but covering the entire die with interconnects should not be a problem if the interconnects were layered on top of the grid of cores, which would essentially be a double-decker processor.
Rocketshiporion 00:05, 24 January 2011 (UTC)
Each core in the TILE-Gx100 is a processor — in the sense that each fetches and executes its own independent instruction stream. I think I have misunderstood what a core is in this discussion. The Tilera processors are also not coprocessors or application-specific; Quanta Computer will ship servers with the TILEPro-64 running Linux for cloud-computing applications later this year. If you got the impression from Tilera's solutions page that TILE processors are coprocessors or application-specific, that is just one potential application. Processors have in the past been used as coprocessors; SGI in the 1990s used Intel i860s as geometry accelerators for graphics hardware. The same processor could be used stand-alone. I'm sorry for digressing, I just thought I should clarify Tilera's situation.
About the number of links a 128-core processor might need: what sort of network should such a processor have? If each core were linked to every other core by a one-bit, bi-directional, 5 GHz link, then 16,384 wires would be needed. Such a link would have a peak bandwidth of 625 MB/s. Since these sorts of links usually do not have additional wires to implement the protocol that controls data movement, the same wire needs to carry protocol information, reducing usable bandwidth. Today's consumer network hardware offers more bandwidth than such a link. For better performance, the number of wires per link will have to be increased, to 16 for example. Then 32,768 wires are needed.
About covering the cores with these links — my (poor) understanding of VLSI design and technology says that this is a problem. To have a complex, high-clock-rate Nehalem-class core, having plenty of interconnect levels is very important to ensure that the connections between the transistors can be made and kept short. My (poor) understanding is that process limitations mean that the interconnect levels closest to the substrate are the ones with the finest widths, and are therefore the most suitable for local routing because of their fine width and resulting speed (as they have low capacitance). Simply placing the links above the cores means that the interconnect levels with the thicker wires will have to be used. These are going to be power-hungry, not so fast (especially over long distances, such as from one side of a die to the other), and will be difficult to route, as every core needs a link to every other core. Any one of these characteristics will make such a scheme impossible. Rilak (talk) 04:44, 24 January 2011 (UTC)
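For concreteness, the raw link count is easy to work out (my own arithmetic, under the same all-to-all assumption discussed above). A complete point-to-point network on n cores has n(n - 1)/2 bidirectional links:
n(n - 1)/2 = 128 × 127 / 2 = 8,128 links
Counting each direction separately doubles that to 16,256, which is where the "over 8,000" and 16,256 figures above come from. At 16 wires per link, the total already exceeds 130,000 wires before any protocol signalling is added.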
I did a bit of research on the Teraflops Research Chip, and it turns out that it only has 80 simple cores, not 128, and each core has two FPUs, so it would not meet any of your requirements. Also, I found out that the Tilera processor with more than 100 cores is actually due in 2013, not this year. My bad. Rilak (talk) 05:36, 24 January 2011 (UTC)
Larrabee cores support the full x86-64 instruction set. Apparently they were planning 48-core chips by 2010, but things haven't gone according to plan. -- BenRG (talk) 07:43, 24 January 2011 (UTC)

greenarrays.com has a chip with 144 independent FORTH processors, faster than heck but kind of hard to figure out how to use. If you mean x86-64 cores or something like that, it's not really practical with today's technology (there is an AMD 12-core chip though). 120 ARM cores on a die might be possible but it's not clear what it would be good for. 67.122.209.190 (talk) 05:44, 27 January 2011 (UTC)

What is the typical IDE - and "virtual Windows or Linux environment" - used to develop application code for Android-based mobile phones or iPhones?

I'm not familiar with how developers simulate the mobile-phone environment for Android-based phones or iPhones when developing applications. I'm trying to understand the most commonly used IDEs and "virtual environments" that developers would use to simulate the mobile-phone environment (for Android phones and for iPhones) to develop and test application code. Are there commonly used open-source systems for this? What about the most commonly used commercial software systems? —Preceding unsigned comment added by 71.198.81.56 (talk) 04:06, 23 January 2011 (UTC)

Google and Apple each publish an SDK that includes a phone emulator that can run software designed for the phone. The SDK and emulator are not open source and are specific to that mobile operating system (Android, iPhone, etc.). Third-party emulators are also available, but they are often not used by application developers. Roberto75780 (talk) 04:49, 23 January 2011 (UTC)

For Android the SDK is available here[1] for Windows, Linux, or OS X. It can be run from the command line without an IDE, but plug-ins for the Eclipse IDE are available from Google.
For iPhone (iOS) you would use a Mac with OS X running the Xcode IDE, and write code in Objective-C.[2] More information is at iOS SDK. You have to pay Apple for the SDK, while the Android tools are free (though not open source). --Colapeninsula (talk) 11:08, 24 January 2011 (UTC)

Starting a program, waiting a defined time period and then gracefully closing it in Windows

Resolved

Hi. While I can probably work this out myself, I've tried quite a lot of searches and a few things, and it seems a bit pointless spending a long time on something which is hopefully trivial for some here.

I'm looking for a way in Windows to start a program, wait a defined time period and then gracefully shut down the program (added bonus if it forcefully terminates if necessary after a defined time period). The exe is Chrome although Opera would probably be okay. (FF and IE take too long to start although I actually found something which works for IE since it's better integrated with VBS).

I can do the starting and waiting fine; my main problem is working out how to shut down. I could use taskkill, but it seems there should be a way of handling it in the script, since it started the program, rather than relying on additional external programs. An even better bonus would be if Chrome could be started minimised or in the background (and stay that way).

While I'm using VBS partially because I never installed any other scripting programs on this computer (as you may guess, I don't script much), I'd welcome code for anything else provided it will work on Windows without too much effort and is in a language that isn't too hard to understand.

If you're wondering, I want to open a set of URLs one by one, i.e. quitting after waiting for a URL to load (because of the SWF used in what I'm doing, I'm not sure it will be easy to work out when the page has finished 'loading', so just waiting a defined period is okay), then opening Chrome again with the next URL (it will end up being a lot of URLs, so loading them all simultaneously is out of the question). I believe I can work this part out myself, so that isn't the issue. Of course I don't have to quit, but it seemed the best and easiest bet to keep everything clean, and Chrome loads fast. (I looked for an addon which could open a list of URLs one by one in the same window, waiting a defined period between loading each new one, but eventually gave up.)

P.S. Something like wget won't work because of the need to understand SWF. P.P.S. Going by the window name won't work because this can change.

Thanks Nil Einne (talk) 12:51, 23 January 2011 (UTC)

While I imagine there's likely to be a more graceful way of doing this (I'm not much of a programmer), have you considered an automation scripting program like AutoIt to perform this task? -Amordea (talk) 16:30, 23 January 2011 (UTC)
Thanks for the suggestion. I did consider AutoIt for this briefly, but wasn't sure it could do what I wanted. I was thinking I'd probably find some way to get the Chrome window to start minimised so this happens in the background, and while this may still be possible in AutoIt (worst case you can use shortcut keys, I guess, or probably some other program), I wasn't sure, since it's been a while since I used it (actually it's not even installed on this computer; I downloaded it a few weeks ago for something else but didn't end up using it). However, I eventually decided just to try taskkill with VBS, and while this worked, from my tests I realised what I was doing wasn't going to work as well as I'd hoped. So I've decided to use the Jython-based Sikuli [3] (which I've been using for a related project) and stick with doing it in the foreground, using the graphical aspects to help determine when what I'm doing has finished loading (I didn't consider it at first because I thought the graphical aspects were unnecessary and just added complexity). Nil Einne (talk) 17:43, 23 January 2011 (UTC)
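For reference, here is a minimal VBScript sketch of the original start-wait-close idea, assuming chrome.exe is reachable on the PATH (in practice you may need its full path) and using a placeholder URL and wait time. The WScript.Shell Exec method returns a handle to the process it starts, so the script can end that process itself without an external taskkill; note, though, that if Chrome is already running, the new process may just hand the URL to the existing instance and exit, in which case Terminate won't close the window.
Dim shell, exec
Set shell = CreateObject("WScript.Shell")

' Start Chrome with one URL and keep a handle to the new process
Set exec = shell.Exec("chrome.exe http://example.com/")

' Wait a defined period for the page (and any SWF) to load
WScript.Sleep 15000

' End the process we started, if it is still running (Status 0 = running)
If exec.Status = 0 Then exec.Terminate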

Wikipedia Main page on newest Opera

I don't usually follow this desk, so I don't know if this has been asked before, and it's kinda hard looking for a possible answer through the archives, with the two search strings being Opera and Wikipedia - a lot of white noise there.

So here goes: a couple of weeks ago my Opera auto-upgraded from version 10.x to version 11.00, and I had to do some fine-tuning to get it back to how I want it. But there is one new thing that annoys me, and I can't figure out how to change it: the Wikipedia main page no longer accepts pressing the enter key as the "Go on now, start looking will you?" command. I have to type the search string into the search field and then grab the mouse and move the pointer to the "search" button, which is annoying. I thought maybe it was some new Wikipedia setting, but pressing "enter" to start searching works fine in FF. I tried looking on the Internet for some hints, but had the same white-noise problem I mentioned above. I got as far as figuring out that I probably need to change something on the about:config internal page of the program, but I have no idea what. Help? This doesn't happen on other pages (for instance, Google) or inside Wikipedia with the search field on the side (or rather, on top; I remembered now that I kept the one-before-last Wikipedia skin, and the search option is on top in the newest one), only on the main page of Wikipedia. TomorrowTime (talk) 21:03, 23 January 2011 (UTC)

Opera has a long history of being buggy. ¦ Reisio (talk) 21:33, 23 January 2011 (UTC)

I use Opera 11 and the ENTER key works like a charm for me on the main page when I type in the search box. --Best Dog Ever (talk) 01:30, 24 January 2011 (UTC)

He means http://www.wikipedia.org/; it doesn't work. ¦ Reisio (talk) 01:37, 24 January 2011 (UTC)
Yes, that's what I meant. Any ideas if anything can be done about that, or is it some sort of bug? TomorrowTime (talk) 04:06, 24 January 2011 (UTC)
Since Opera 11 seems to still be able to submit form data with the ENTER key on other sites, I'd say it qualifies as a website "bug", and so you might bring it up at WP:VP/T and/or http://bugzilla.wikimedia.org/. As I said, though, Opera has a long history of being buggy, and spending too much time debugging for it is not particularly wise, IMO. ¦ Reisio (talk) 08:36, 24 January 2011 (UTC)