This is the talk page for discussing improvements to the C data types article. This is not a forum for general discussion of the article's subject.
This article is rated Start-class on Wikipedia's content assessment scale.
I was annoyed to see that a lot of the useful information on this page has been purged. Please compare: http://en.wikipedia.org/w/index.php?title=C_data_types&diff=prev&oldid=456503038
The reason given was that it is available on Wikibooks... OK, but where is the link to that information? There are two complete *books* on that page, and I was looking specifically for the different pointer/array/function examples (pointer-to-pointer, pointer-to-array, how modifiers fit into that, etc.).
If that sort of information doesn't belong on Wikipedia -- fine. But at least link to more information in WikiBooks if that is why the information is being purged.
68.54.162.15 (talk) 08:33, 30 June 2012 (UTC)
The following statements are not true:
- An int must be at least 16 bits long.
- A long int must be at least 32 bits long.
- A long long int must be at least 64 bits long.
A corrected statement would read:
If no one disagrees within a few days, I'll change those three sentences to similar statements as above and add a reference. --66.41.31.76 (talk) 08:19, 22 September 2008 (UTC)
int
nowadays (which is more than 16 bits). The current C standard says (emphasis added):

5.2.4.2.1 Sizes of integer types <limits.h>
The values given below shall be replaced by constant expressions suitable for use in #if preprocessing directives. [...] Their implementation-defined values shall be equal or greater in magnitude (absolute value) to those shown, with the same sign.
[...]
- minimum value for an object of type int
- INT_MIN -32767 // −(2^15 − 1)
- maximum value for an object of type int
- INT_MAX +32767 // 2^15 − 1
- maximum value for an object of type unsigned int
- UINT_MAX 65535 // 2^16 − 1
I absolutely agree with your response. However, since the definitions of the types primarily provide clarification to novice programmers, there should be a clear distinction stated between the amount of memory allocated and the range of values available to the variable.
For example, a novice could read the statement as saying that int x = 1 is a value that is too small, and that a value occupying 16 bits, say int x = 0x8000 or larger, is necessary instead. With this in mind, I'm assuming one can see how confusion could result, as there is no explicit divide made between memory allocation and the available range of values for a variable.
Perhaps a separate table should be introduced that explicitly states the range of values available for each of the variable types.
--66.41.31.76 (talk) 06:38, 24 September 2008 (UTC)
80.162.60.16 (talk) 12:46, 14 January 2012 (UTC)
Aren't these two sentences mutually exclusive?
The only guarantee is that the long long is not smaller than long, which is not smaller than int, which is not smaller than short.
long long signed integer type. At least 64 bits in size.
213.247.248.50 (talk) 02:44, 10 June 2013 (UTC)
It needs to be removed. The people defending this know it's wrong, because they purposefully misquoted it. It states, word for word, on page 505 under Annex E of both the C99 and C11 drafts:
ISO/IEC 9899:2011
The contents of the header <limits.h> are given below, in alphabetical order. The minimum magnitudes shown shall be replaced by implementation-defined magnitudes with the same sign.
The values stated below are simply a template. Common sense: why would the standards committee require there to be a 64-bit data type when most processors don't even have 64-bit registers? All the standard says is that long long shall not be smaller than long, which shall not be smaller than int, which shall not be smaller than short, which shall not be smaller than char. The fact that most modern architectures use short/int/long/long long sizes of 16/32/32/64 does not make it the standard. It's perfectly natural for an 8-bit architecture like an 8080 to use short/int/long/long long sizes of 8/8/16/16. The purpose of a wiki isn't to hold the hand of a person who can't understand "x is not smaller than y"; it's to provide people with facts.
So frustrated right now... This error just wasted hours of my time trying to figure out how to make a C compiler for an 8-bit architecture conform to C11 and you guys have known this was wrong for over 4 years. — Preceding unsigned comment added by DaemonR (talk • contribs) 08:42, 9 September 2014 (UTC)
It's not true there are 4 basic types; that's just the way you've chosen to introduce them. The C90 standard has more like twelve. You don't mention several other built-in types.
C99 introduces long long, complex and imaginary types, and _Bool.
If you think it's worth covering types properly, I'm happy to help. Akihabara 02:55, 12 July 2005 (UTC)
static is not discussed, and should be. And isn't const also used by compilers when optimizing? If it isn't (i.e. they detect a lack of change on their own) that is interesting enough to mention, IMHO.
If other basic types beyond the basic 5 are mentioned, the time of their introduction (i.e. C99) should be specified. Most books have char, int, float, double, and sometimes void, and lumping "imaginary" in with them would confuse people.
UNIX has programs (cdecl and c++decl) for converting type declarations to and from English.
There is at least one more type: Boolean!
Although it is not possible to declare a variable of that type, the Boolean type is essential to C! Without it, it would not be possible to write a decent if statement.
ALbert Mietus // http://albert.mietus.nl —The preceding unsigned comment was added by 62.177.151.49 (talk) 07:41, 2 April 2007 (UTC).
Also needs to include struct, union and enum. http://en.wikipedia.org/wiki/C_syntax has some info on these. —Preceding unsigned comment added by 119.224.35.68 (talk) 00:32, 31 January 2010 (UTC)
The table in the Size section lists the typical size in bytes of a long double as "8 or 12". I propose changing this to "8, 12, or 16", as 16 is a very common sizeof(long double), as is the case for standard gcc x86-64 on GNU/Linux. --Kamalmostafa (talk) 22:11, 26 November 2010 (UTC)

Custom types with typedef! —Preceding unsigned comment added by 88.165.171.247 (talk) 13:57, 24 March 2011 (UTC)
There are common keywords used together with variable declarations, this includes "static", "volatile", "extern", "auto", "register". Would be nice if someone knowledgeable about all the quirky details covered all of them! 130.237.57.80 (talk) 15:54, 9 March 2011 (UTC)
inttypes.h is said to have been merged into this page. However, this page does not seem to contain as much information on inttypes.h as the original page did. Please fix this.
Jobin (talk) 05:38, 22 October 2011 (UTC)
There is a move discussion in progress which affects this page. Please participate at Talk:C standard library - Requested move and not in this talk page section. Thank you. —RM bot 09:40, 8 November 2011 (UTC)
It is not mentioned here at all. :-( Even if you cannot declare objects of type void, this type is used to build up other types, e.g. pointer or function types. --RokerHRO (talk) 21:47, 26 March 2012 (UTC)
The void type comprises an empty set of values; it is an incomplete object type that cannot be completed.
— Preceding unsigned comment added by 81.186.243.41 (talk) 15:37, 26 January 2013 (UTC)
The ISO C standard describes 3 categories of types:
Then it describes the arithmetic types (_Bool, integer, and floating-point types), the derived types (arrays, structures, unions, pointers, and functions), and finally the so-called "type modifiers" (const, volatile, restrict, _Atomic).
Unfortunately there is no clear structure or categorization in the ISO standard. Perhaps there is one in other notable C books; if so, we should use it. What do you think? --RokerHRO (talk) 13:46, 10 June 2015 (UTC)
Types with "anam" were added, but I found nothing about it after a quick search. It seems to be bullshit. A source would be appreciated; otherwise I think this weird potential type should be removed. — Preceding unsigned comment added by RyDroid (talk • contribs) 17:31, 20 November 2016 (UTC)
It may not be standard C, but is it worth talking about anyway? It is a 24-bit type used on some embedded systems. — Preceding unsigned comment added by 69.70.87.250 (talk) 19:41, 10 August 2017 (UTC)
int24_t. Vincent Lefèvre (talk) 21:31, 10 August 2017 (UTC)

There are still some types not mentioned in the article which would merit it IMHO, like off_t and time_t, which are at the same time
--212.185.199.2 (talk) 15:31, 12 February 2018 (UTC)
The draft for C2x includes a requirement for two's complement representation. The future is now. http://www.open-std.org/jtc1/sc22/wg14/www/docs/n2412.pdf
Maybe this should be noted in the section about int ranges. Wqwt (talk) 07:28, 15 October 2021 (UTC)
@Johann-Tobias Schäg I have reverted your edit about the signed integer sizes. What was written was correct, since, as the note (C data types#cite note-restr-5) says, this is exactly the range guaranteed by the C standard. Two's complement is not guaranteed, but is drafted for C2x. Wqwt (talk) 08:18, 9 August 2022 (UTC)
In the table that lists and describes the data types, couldn't there be a column with examples of each data type, so newcomers to C don't get confused or mix any of them up? Like, obviously a char would be only a single character like 'c', 'S', '3', or '#'. But a float: is it strictly a fractional number less than one (0.75), or can it be greater (6.432)? See what I mean? 98.216.67.148 (talk) 02:14, 1 October 2022 (UTC)
This article was the subject of an educational assignment supported by Wikipedia Ambassadors through the India Education Program.
The above message was substituted from {{IEP assignment}}
by PrimeBOT (talk) on 19:58, 1 February 2023 (UTC)
The article should really include additional historical context about why these types are specified the way they are, as modern readers may not be familiar with the multiplicity of byte widths, word lengths, and sign representations that existed in the 1980s when X3J11 was trying to specify a set of data types that was common to historic PDP-11 and microcomputer implementations, 32-bit and 36-bit mainframes, and supercomputers. By comparison, many newer languages provide only bit-precise types (and have no support for, e.g., 9-bit characters, because 36-bit architectures are now historical relics). 207.180.169.36 (talk) 18:41, 19 November 2023 (UTC)
Yes, I saw the open-std PDF saying that LONG_MIN equals -2147483647. But this PDF is a draft (and a draft can contain errors). The minimum value of an n-bit signed integer (sign bit excluded) is, in decimal, -2^n, so -2147483648, not -2147483647. I don't care what the open-std PDF says; it's wrong in my opinion. If you set the minimum value to -2147483647 and not to -2147483648, then you have one binary value too many, namely 10000000 00000000. What would you do with such a bit pattern? Have two zeros, one positive and one negative? If so, why not three or four? Come on, I hope you are not stupid people... So this open-std PDF doesn't seem to be without errors, and thus Wikipedia shouldn't use it as a primary source of information. — Preceding unsigned comment added by 163.5.23.68 (talk) 17:05, 13 March 2024 (UTC)
The chart of type sizes lists 'int' as 16 bits (between -32,767 and 32,767); in almost all places int is 32 bits (I have never seen it as 16 bits, but I assume it was in older versions?).
If it is really mostly 16 bits, I think a comment like "32 bits on some systems" could be added, or it could even be changed to "16/32" (with the explanation changed accordingly). Eylon Shachmon (talk) 01:41, 18 April 2024 (UTC)
I was just wondering which section of the Standard specifies the minimum value of the unsigned integer types to be $0$?
The closest I could find was §6.2.6.2 Integer types on [page 41](https://www.open-std.org/jtc1/sc22/wg14/www/docs/n3096.pdf#page=60) of the C23 working draft, which says:
> If there are $N$ value bits, each bit shall represent a different power of $2$ between $1$ and $2^{N-1}$.
Is this the correct section? 115.188.121.25 (talk) 03:40, 18 June 2024 (UTC)