Computing desk
Welcome to the Wikipedia Computing Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


January 10

3G video calling architecture

Hi,

I need to know about 3G video calling and its call-routing architecture. Please let me know at the earliest. —Preceding unsigned comment added by 182.71.230.217 (talk) 08:18, 10 January 2011 (UTC)[reply]

How to assign a structure address

Hi,

I have a doubt about C structures: how can I assign a structure's address manually, rather than using the default? Here is an example structure:

struct ADC
{
 int A;
 int B;
}REG1,REG2;

When I compile the above structure, a default address is assigned, something like 8100, 8101, and so on. I am using a 16-bit processor, and for the same code I want to use my own addresses, like 8000, 8001, 8002, 8003... I know very well that a structure is stored in contiguous memory locations, so how do I make the structure (ADC) start at address 8000? Can anyone help me solve this issue?

THANKS & REGARDS, M.ANTONY PRABHU

Try this:
struct ADC
{
 int A;
 int B;
} *REG1 = (struct ADC *) 0x8000;
-- BenRG (talk) 14:01, 10 January 2011 (UTC)[reply]

Is it possible to do this without a pointer? Is it possible to use the preprocessor to assign the address? —Preceding unsigned comment added by 192.8.222.82 (talk) 06:40, 11 January 2011 (UTC)[reply]

Your compiler may have a special syntax to do what you want. Check the documentation. Here's an example. -- BenRG (talk) 09:58, 11 January 2011 (UTC)[reply]

cmd capture

Is there an easy way to capture and save an image from a webcam via the command prompt in Windows? 82.44.55.25 (talk) 15:28, 10 January 2011 (UTC)[reply]

See if these help? [1] [2] F (talk) 03:02, 13 January 2011 (UTC)[reply]

Number of unique colours in a digital photograph

I would like to identify which of these images http://www.google.co.uk/images?hl=en&expIds=0&xhr=t&q=flowers+spring+road&cp=18&um=1&ie=UTF-8&source=og&sa=N&tab=wi&biw=1024&bih=609 is closest to the original photograph. Inspecting them with Irfanview, they differ in their number of unique colours. Am I right in thinking that the image with the highest number of unique colours is most likely to be closest to the original photo? I expect that when they are reduced in file size in preparation for inclusion on a website, the reduction includes reducing the number of unique colours. Thanks 92.15.21.144 (talk) 15:35, 10 January 2011 (UTC)[reply]

Reducing the dimensions of a picture nicely (i.e. not using nearest-neighbour sampling) will potentially increase the number of unique colours, since nearby pixels are averaged, although it will also potentially decrease the number of unique colours simply because there are fewer pixels available to be any colour at all. Scaling a picture up nicely will likewise increase the unique colours, though it will be more blurry. These images are jpegs, and the way to reduce the file size of a jpeg is to increase the compression. In a palette-indexed image such as a GIF or certain kinds of PNG, the number of colours can be reduced directly to make the file smaller. Jpegs don't work that way. They produce small file sizes for images containing smooth gradients, for instance. Their compression is achieved by reducing the level of detail (frequency) of the image. It might incidentally be the case that a more compressed jpeg contains fewer unique colours than a less compressed one, due to the loss of colours contained in some small details, but I wouldn't rely on it. 81.131.24.192 (talk) 16:02, 10 January 2011 (UTC)[reply]
I know this is nitpicking, but I believe the word you want is "distinct" or "different," not "unique." The word "unique" means "one-of-a-kind." —Bkell (talk) 16:47, 10 January 2011 (UTC)[reply]
You're absolutely right - that is nitpicking. :D Seriously, "unique" gets used that way all the time, and the OP is just quoting from a dialog box displayed in Irfanview. The colours are one-of-a-kind if we limit the context to the image. The opening words of Unique identifier are "With reference to a given (possibly implicit) set of objects ..." 81.131.33.9 (talk) 18:04, 10 January 2011 (UTC)[reply]
Yes, I know that the word "unique" is used that way all the time, but "unique" doesn't mean "distinct." :-) The colors in an image aren't unique—each one is probably assigned to many different pixels. A unique identifier is assigned to one and only one object. That's what makes it a unique identifier. —Bkell (talk) 06:06, 11 January 2011 (UTC)[reply]
Oh, fair point. *Strokes chin* 81.131.69.118 (talk) 06:36, 11 January 2011 (UTC)[reply]
Possibly depends upon if you mean "unique pixels" or "unique colours" for the set of pixels. 92.24.181.78 (talk) 15:07, 11 January 2011 (UTC)[reply]

So the "number of unique colors" to quote Irfanview, not my choice of words, is not any use in trying to guess which jpg image has been the least mucked around with? I was thinking of suggesting the ratio number-of-different-colours divided by number-of-pixels as an approximate metric or proxy for the quality of an image, but that's out.

Is there any way of deciding objectively which is the least-blurred best quality image? Thanks 92.15.3.168 (talk) 20:27, 10 January 2011 (UTC)[reply]

The "less blurred" image will have more high frequency data. You can perform a 2-D fourier transform, using free software like GNU Octave, to analyze that. However, you should know that digital image manipulation can be very complicated. It could add or remove unique colors, high frequencies, and so on. So strictly from these metrics, it's not really possible to say which image is closer to the "source." If we had a known, canonical source image, we could estimate which of the other images are more processed than others by comparing to the original; but it's not generally possible to directly conclude which image is the original based solely on statistical analysis of image characteristics. Nimur (talk) 22:40, 10 January 2011 (UTC)[reply]
The largest image you find has the best chance of most accurately matching the original. In an ideal case, an image would have EXIF metadata attached to it that would tell you how it was created, but none of the ones there that I looked at have any. Looie496 (talk) 22:56, 10 January 2011 (UTC)[reply]
Not really, people often increase the size and you just get a large but blurry image. 92.15.3.168 (talk) 00:47, 11 January 2011 (UTC)[reply]

I will have to try an experiment with increasing and decreasing the size of a test image to see what effect that has on the number of different colours and also the ratio described above. 92.24.190.219 (talk) 00:11, 12 January 2011 (UTC)[reply]

Non-English glyphs in programming

Do they get any use? Plenty of common keywords (e.g. continue) are English, but does that mean the convention is to write in English throughout, even if you come from, say, Sweden, or would it be normal for such a programmer to name a function " räkna() " ? (I particularly want to know about the use of glyphs with diacritics, like ä, not the use of non-English words.) 81.131.33.9 (talk) 18:15, 10 January 2011 (UTC)[reply]

Either the compiler/interpreter can parse them or it can't. Easy to find out on your own. ¦ Reisio (talk) 18:16, 10 January 2011 (UTC)[reply]
That's not really my question - I'm asking about common usage. It's because I'm designing a font, and want to know whether to bother putting them in or not. Edit: actually I know what to do - of course they must go in, because even if they're not used in identifiers, they'll still appear in various string literals. (Mind you, I'm still kind of curious about the original question.) 81.131.33.9 (talk) 18:20, 10 January 2011 (UTC)[reply]
Yes, the convention is to write in English throughout, except in a few rare non-English-based programming languages. Marnanel (talk) 18:38, 10 January 2011 (UTC)[reply]
Actually, in my (admittedly rather limited) experience, programmers won't hesitate to write identifiers (and comments) in their native language, though they will omit the diacritics (because usually these are not allowed by the language specification). Source code intended to be publicly released to the wider world is generally an exception.—Emil J. 19:05, 10 January 2011 (UTC)[reply]
Fantastic find :) 81.131.33.9 (talk) 19:32, 10 January 2011 (UTC)[reply]
Recently-designed languages like Java explicitly allow non-ASCII characters in identifiers and string literals (and of course in comments). Modern compilers for older languages like C understand non-ASCII encodings, but people tend to avoid them for portability reasons. Comments are likely to contain non-ASCII characters in any case because compilers that don't understand character encodings will usually skip over unknown bytes in comments. I've seen a fair amount of code with Japanese (Shift-JIS) comments and everything else in ASCII. -- BenRG (talk) 21:33, 10 January 2011 (UTC)[reply]
I'm not a programmer, but I do write some code occasionally, and I sometimes (but not too often) use my native language - for one thing, it is less likely that a given word I want to use has a reserved meaning in the programming language I am using (which I tend not to be perfectly fluent in). But I never ever use native letters - I think most computer users from Scandinavia at least get used to replacing the native letters (jørgen --> jorgen) because it leads to far fewer problems, even if the language / operating system supposedly supports them. There's always a complication someone hasn't thought of. For programming languages, it could for example be the encoding of the relevant text file. Jørgen (talk) 19:31, 10 January 2011 (UTC)[reply]
That's valuable anecdotal information, thank you. 81.131.33.9 (talk) 19:39, 10 January 2011 (UTC)[reply]
Java allows arbitrary Unicode as its input, though it also provides an escape sequence mechanism to represent arbitrary Unicode characters without having to actually include them in the source (where they might cause grief for text editors, printers, etc.). C99 allows similar escapes (that it calls "universal character names"), and allows implementations to interpret identifiers and strings with whatever encoding they like. Python requires its input to be ASCII but allows a "coding declaration" to affect the interpretation of string literals only. So whether non-ASCII characters are common in source code (literally rather than by escape sequences) surely depends on which programming community you mean. It is also undoubtedly true that programmers concerned about the confusion that can arise when non-ASCII code is shared between different systems (some of which might not be 8-bit clean, or might assume different encodings) will prefer ASCII even when writing in other languages. --Tardis (talk) 21:18, 10 January 2011 (UTC)[reply]
Haskell users seem to be considerably more willing to stray far from ASCII than do programmers in other languages (its native input is, like Java's, Unicode - I think UTF-8). In particular, it's fairly common to define operators using characters from the greater mathematical alphabet - so e.g. you get code like test "-a+b*c≡d⊕!(c∧d)" (source). Equally, as a purely functional language beloved of academics, it's often written as if its code really was mathematical notation, so one often sees non-ASCII characters (particularly Greek) used, much as they are in math in general. 87.115.125.162 (talk) 21:37, 10 January 2011 (UTC)[reply]
Same for Racket (I think there are probably other Schemes that support Unicode). I recently discovered the tex input-mode in Emacs and went a little crazy in my last project. Paul (Stansifer) 22:50, 10 January 2011 (UTC)[reply]

What is the efficiency of human sorting?

...or phrased slightly differently, what sorting algorithms do humans tend to use, and what are their efficiencies? I suspect the answer will depend a lot on the items that are to be sorted, so I'll present the example I'm particularly interested in: the sorting of sample tubes at a laboratory. The tubes are numbered (long numbers, say 9 digits). The numbers come in different series with the same three leading digits (six series, say), and may or may not arrive partially sorted. Sometimes they arrive in racks where each rack is partially sorted. I read the article Sorting algorithm, which states that a good algorithm has a complexity of O(n log n) and a bad algorithm is O(n²).

How will a human perform, in the setting I described? Guesstimates are welcome, pointers to relevant empirical studies even better.

Thanks, --109.189.66.11 (talk) 19:59, 10 January 2011 (UTC)[reply]

When I've taught algorithms class, I task 3 to 4 students with sorting short, medium, and long lists of items (sometimes numbers, sometimes words, sometimes sorting on height or size). My experience is that humans use an insertion sort or bubble sort for short lists. When the list gets long, they tend to use a bucket sort to get it partially sorted and then an insertion sort on the shorter lists. There is a problem with measuring efficiency by computer standards: a human is capable of viewing more than one item at a time. So, insertion sort on small lists is O(n). For example, sort 4, 9, 2. You can see all three numbers at once and instantly note that 2 is the smallest. Then, you can see both 4 and 9 at the same time and note that 4 is the smallest. That leaves 9. A computer would have to look at each number, one at a time, to see that 2 is the smallest. Then, it needs to look at the 4 and 9 separately to note that 4 is the smallest. If a computer were capable of looking at more than one number at once and performing a comparison on all numbers against all other numbers at the same time (the way humans do), sorting would be much faster in computers. -- kainaw 20:23, 10 January 2011 (UTC)[reply]
(EC) Maybe it's obvious, but I'll point out that many of the linked sorting algorithms are useful for humans sorting physical objects too. For instance, I've used a truncated merge sort for sorting student assignments. I imagine it would work well for your tubes, provided you have plenty of cache space. SemanticMantis (talk) 20:30, 10 January 2011 (UTC)[reply]
Thanks! Bubble sort and insertion sort are O(n²), merge sort is O(n log n). If the number of samples is doubled, is it safe to assume that the number of man-hours needed to do the job should be somewhat less than the square of what it was before the number of samples was doubled? --109.189.66.11 (talk) 20:44, 10 January 2011 (UTC)[reply]
As Tardis said below, those big-O times are based on assumptions about the speed of the primitive operations that don't necessarily apply to sorting of physical objects. -- BenRG (talk) 22:48, 10 January 2011 (UTC)[reply]
(edit conflict) If you know that there are only a small number of "series" (six, you suggested), you'll use bucket sort or one of its many variants (MSD radix sort or postal sort or so). If sorting something like a deck of cards where the total is known and every card's position in the result is known, you can use a trivial tally sort that barely counts as a sort at all. These are not comparison sorts and so the usual rule (namely, that all good sorts have that complexity) doesn't apply.
One trick that humans can often use is that shifting physical objects can be much more efficient than shifting data: you can shove a line of books down a shelf to make room for another in their midst, and if you're strong enough it doesn't take any longer no matter how many you're moving. This makes insertion sort pretty efficient in the physical world even though it's a "bad" algorithm in the standard sense. Heuristics allow that to be applied even when there are multiple shelves each of a fixed size: "allocate" more shelves than will be needed in the end, and then almost all of them will retain some empty space throughout so that you can do the shove-insert trick. Then you can pack the shelves (if desired) at the end in linear time.
Anecdotally, I recently wanted to sort about 200 CDs into racks that had discrete slots (so that I really had to treat them like a computer would); I very consciously used merge sort between various piles of CDs on the floor (with the last merge "writing" into the racks). Of course, since I know the standard sort algorithms, perhaps I'm not useful evidence for what "humans tend to use".
You might also be interested in physical computation (which is unfortunately a redlink): algorithms that rely on physical processes other than the usual read/write/address of the Turing machine. Quantum computing gets lots of attention, but there are surprisingly efficient ways to use even the classical world to do what you might call computation: spaghetti sort, for instance. Unfortunately, I'm having trouble finding other good references on physical computation (variously called {physical|analog} {computation|algorithms}). --Tardis (talk) 21:00, 10 January 2011 (UTC)[reply]
Another factor to consider is whether one human or several will be doing the sorting. The multiple-human case mimics parallel processing in computers. For example, one person could do a bucket sort, as Tardis said, dividing the tubes up into, say, 6 series. Others could then sort those buckets using an insertion sort, and wouldn't need to wait until the first person finished his sorting. If there's just one human doing the sorting, though, perhaps an insertion sort right away makes sense (although there may still be 6 lines ("buckets") of tubes laid out, but he would put each in its proper position in each "bucket" immediately, and not just pile them up for later sub-sorting). StuRat (talk) 21:47, 10 January 2011 (UTC)[reply]
I remember reading somewhere that most humans did insertion sort methods on lists up to a certain size, and then switched to various divide and conquer style sorts. This book (ISBN 0596155891 page 208) says "Humans naturally resort to a divide-and-conqueror algorithm" but it's referring to a specific kind of problem. I'm having trouble finding papers on that, but I'm sure someone's researched that question. Shadowjams (talk) 00:27, 11 January 2011 (UTC)[reply]
Thanks, everyone! --91.186.78.4 (talk) 09:24, 11 January 2011 (UTC) (OP from different PC)[reply]

Emerson LC220EM1 does not turn on

We have an Emerson LC220EM1 from Walmart. It was working fine until last night. This morning, it just quit working. The power button does nothing and I just don't see any signs of life in it. Any ideas on what I can do for troubleshooting? Thank you much. Kushal (talk) 21:04, 10 January 2011 (UTC)[reply]

For anybody else who is wondering, that model is a '22" Class LCD 720p 60Hz HDTV'. --LarryMac | Talk 21:12, 10 January 2011 (UTC)[reply]
Thank you, LarryMac. I can't find the receipt although it has only been a few months since we bought it. Kushal (talk) 21:23, 10 January 2011 (UTC) [reply]
The Emerson website links to the Funai website, which offers this handy tip - "Try unplugging the unit for about 5-10 minutes, then plugging it back in. You may also want to try a new outlet. " and now I feel like part of The IT Crowd --LarryMac | Talk 21:26, 10 January 2011 (UTC)[reply]
Thank you. I tried it with no success. Any other ideas? Kushal (talk) 23:50, 10 January 2011 (UTC)[reply]
No receipt at all? Even so, it should still be covered by the manufacturer's warranty (usually a year for things like TVs). Try talking to the store first, but failing that contact the manufacturer (or their representative in your country). That said, if you can definitely diagnose the problem as being with the on/off switch and not somewhere else in the power supply chain, they are quite easy to replace yourself, if you can get the part and are handy with a soldering iron. Astronaut (talk) 12:30, 11 January 2011 (UTC)[reply]
I'll try talking to Walmart tomorrow. I didn't get the TV myself. It was rather an impulse buy (and I should probably stop there). Any ideas how I can contact Emerson's representative for the US? Thanks Kushal (talk) 02:53, 12 January 2011 (UTC)[reply]
If you paid by credit/debit card, try taking a copy of your bank statement showing the purchase. It at least shows that you bought something of the same value as the TV from them when you say you did. CS Miller (talk) 20:50, 12 January 2011 (UTC)[reply]
I will try that. Kushal (talk) 18:13, 14 January 2011 (UTC)[reply]

Comprehensive Scrabble word list needed

I would like an official Scrabble word list I can download. I did a Google search, but only found partial dictionaries, such as those excluding naughty or long words. My requirements:

1) Should be the North American version of English word spellings.

2) Should be free.

3) Should ideally be sorted alphabetically, but I can sort them myself, if I must.

4) Should be a readable text file, ideally, with one word per line (I don't want various forms of the word on the same line, for example).

5) I would think it would all fit in one large file, but I could also take, say, 26 files (for words starting with each letter).

I would like to write my own Scrabble solver, and need this to get started. Also, there's a game similar to Scrabble called QWERTY (Pogo has it), and I'd like the same thing for that, since they use a slightly different dictionary. Thanks for your help ! StuRat (talk) 21:30, 10 January 2011 (UTC)[reply]

How about this one? [3] - it's not a raw text file with one word per line, but turning it into one shouldn't be very hard. (I found this via the search string "aardvark aback abacus abaft abalone abandon abandoned abase abash abate abatement abatis abattoir abbacy abbe abbess abbey abbot abbreviate abbreviation abdicate abdomen abduct abeam abecedarian".) 213.122.53.182 (talk) 22:31, 10 January 2011 (UTC)[reply]
As for point 1, British spellings are allowed even in North American Scrabble. If you intend to disallow "COLOUR", you're more strict than any official rule set.
If you google "TWL06", you should get what you need, including a link to the wikipedia article Official Tournament and Club Word List which is another way to find it. 67.162.90.113 (talk) 22:35, 10 January 2011 (UTC)[reply]
Yes, that worked. I found this, which seems to be just what I need. Thanks. StuRat (talk) 23:27, 10 January 2011 (UTC)[reply]
There are some listed here [4]. Aspell's English dictionaries are listed here [5] - uncompress the file (on MS-Windows, 7-zip should be able to handle it), and extract the .cwl files. You'll need the aspell tools to uncompress the wordlists - see the readme in the downloaded file. CS Miller (talk) 20:48, 12 January 2011 (UTC)[reply]
http://www.hasbro.com/scrabble/en_US/search.cfm ¦ Reisio (talk) 01:41, 11 January 2011 (UTC)[reply]
That is a rather basic Scrabble solver, where you enter your letters and it returns all words that can be made from them. I'd like to add more capability:
A) Allow blanks to be specified.
B) Allow some letters to be anchored, so they must be in a given position, such as the first or last letter in the word.
C) Calculate points for each word, and sort them from most points to fewest. This requires adding both point info for letters and special squares on the board.
D) Eventually add capability to input current state of the board and find the best move.
Of course, such programs already exist, but I'd like to write my own as a coding exercise. StuRat (talk) 18:20, 11 January 2011 (UTC)[reply]

Free, downloadable QWERTY (game) word list ?

Now, how about the same, for the game of QWERTY ? StuRat (talk) 23:28, 10 January 2011 (UTC)[reply]

Isn't it just Scrabble without Hasbro's intellectual property (the name 'Scrabble')? Don't waste your time. Anyone playing QWERTY would rather be playing Scrabble. ¦ Reisio (talk) 01:41, 11 January 2011 (UTC)[reply]
It seems to use a different dictionary, and the rules are quite a bit different, too. StuRat (talk) 18:13, 11 January 2011 (UTC)[reply]