What I would recommend is that you not tie yourself to any IDE per se. Let your code live as flat files in the filesystem, and use independent tools (Emacs, the GCC toolchain, Ctags, etc.) for the operations you need to perform. That keeps your codebase IDE-independent and free of the clutter an IDE accumulates around it (.project files and the like).
This is just a partial answer, since your question is fairly broad.
C++ defines an "execution character set" (in fact, two of them, a narrow and a wide one).
When your source file contains something like:
char s[] = "Hello";
Then the numeric byte values of the letters in the string literal are simply looked up according to the execution encoding. (The separate wide execution encoding applies to the numeric values assigned to wide character constants such as L'a'.)
All this happens as part of the initial reading of the source code file into the compilation process. Once inside, C++ characters are nothing more than bytes, with no attached semantics. (The type name char must be one of the most grievous misnomers in C-derived languages!)
There is a partial exception in C++11, where the literals u8"", u"" and U"" determine the resulting values of the string elements (i.e. the resulting values are globally unambiguous and platform-independent), but that does not affect how the input source code is interpreted.
A good compiler should allow you to specify the source code encoding, so even if your friend on an EBCDIC machine sends you her program text, that shouldn't be a problem. GCC offers the following options:
-finput-charset: input character set, i.e. how the source code file is encoded
-fexec-charset: execution character set, i.e. how string literals are encoded
-fwide-exec-charset: wide execution character set, i.e. how wide string literals are encoded
GCC uses iconv() for the conversions, so any encoding supported by iconv() can be used for those options.
I wrote previously about some opaque facilities provided by the C++ standard to handle text encodings.
Example: take the above code, char s[] = "Hello";. Suppose the source file is ASCII (i.e. the input encoding is ASCII). Then the compiler reads 99 and interprets it as c, and so on. When it comes to the literal, it reads 72 and interprets it as H. Now it stores the byte value of H in the array, as determined by the execution encoding (again 72 if that is ASCII or UTF-8). When you write \xFF, the compiler reads 92 120 70 70, decodes it as the escape sequence \xFF, and writes 255 into the array.
Best Answer
2.6.32-29: base kernel 2.6.32, -29 is the final ABI release number assigned by Ubuntu.
2.6.32-29.58: base kernel 2.6.32, -29.58 is an ongoing release by Ubuntu (ABI -29, upload 58).
2.6.11.10: base kernel 2.6.11, .10 is the tenth stable patch release of it. (2.6.11 was chosen by volunteers (read: Greg KH) to be a "long term maintenance" release.)