Thread: byte

  1. #1

    byte

    Why did they originally settle on 8 bits = 1 byte? Why didn't they make it something like 20 bits = 1 byte instead?
    when floods come fishes eat ants
    when floods go down ants eat fish
    life is like that........

  2. #2
    Senior Member
    Join Date
    Aug 2002
    Posts
    310
    I'm not sure what the reason is, but I do know that 20 bits to a byte would make conversion numbers a lot bigger and harder to work with. That might be part of the reason. And could you imagine the binary IP address 11001011011100011011.01110101110011000101.00011101001001110111.11110001110011001111?
    That could get a little crazy for network admins.
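    Even with today's 8-bit octets, binary notation gets unwieldy fast. Here's a minimal Java sketch of what a familiar dotted-decimal address looks like bit by bit (the address and class name are made up for the example, not taken from the thread):

    public class IpToBinary {
        public static void main(String[] args) {
            String ip = "192.168.0.1";   // any IPv4 address; this one is only an example
            StringBuilder bin = new StringBuilder();
            for (String octet : ip.split("\\.")) {
                if (bin.length() > 0) bin.append('.');
                // each octet fits in one 8-bit byte; zero-pad it to eight binary digits
                String bits = Integer.toBinaryString(Integer.parseInt(octet));
                bin.append("00000000".substring(bits.length())).append(bits);
            }
            System.out.println(bin);     // 11000000.10101000.00000000.00000001
        }
    }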

  3. #3
    Senior Member
    Join Date
    Jun 2002
    Posts
    405
    When ASCII was created, not all computers had bytes. Some used five bits for each text element (that is, they used five-bit bytes), some six, and some eight, and some handled text only by workarounds, if at all. However, the creators of the standard judged - correctly - that before long the eight-bit byte would become ubiquitous.

    Now, with one bit you can represent two values, 0 and 1; with two bits you can represent four values; with three bits you can represent eight values, and so on; the total doubles with each additional bit. With seven bits you can represent 128 values (0 through 127, the way it's usually done), and with eight bits you can represent 256.
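    To see that doubling in action, here is a minimal Java sketch (only the arithmetic comes from the paragraph above; the class name is made up):

    public class BitValues {
        public static void main(String[] args) {
            // each extra bit doubles the number of representable values
            for (int bits = 1; bits <= 8; bits++) {
                int values = 1 << bits;   // 2 raised to the power of 'bits'
                System.out.println(bits + " bit(s) -> " + values + " values (0 through " + (values - 1) + ")");
            }
        }
    }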

    The people creating ASCII were defining their own future, and nobody wanted to use up all eight bits right at the outset. For one thing, some equipment needed to use one bit for error checking; for another, 128 values could cover all the characters and functions everyone considered absolutely indispensable; and finally, no model for using all 256 values had yet appeared. Formal, non-proprietary standards are nothing if not flexible, because they must be approved by competing vendors and extremely demanding, large-volume customers. The creators of ASCII used seven bits, specifying that vendors could use the eighth bit any way they wanted, in the full expectation that some vendor's scheme would become widespread enough to be adopted as the next standard. (They also specified that the control codes could be used in any way appropriate to the operational context and the communicating devices - and that, my friends, is what made possible the WordStar keyboard command set.)
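    As an illustration of the "seven bits plus one spare" idea, here's a small Java sketch that strips the eighth bit the way receivers of parity-tagged ASCII effectively did (the byte value and class name are invented for the example):

    public class SevenBitAscii {
        public static void main(String[] args) {
            // a raw byte with its high (eighth) bit set, e.g. by some parity or vendor scheme
            int raw = 0b1100_0001;            // 0xC1
            int ascii = raw & 0x7F;           // keep only the low seven bits
            System.out.println((char) ascii + " (" + ascii + ")");   // A (65)
        }
    }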

    Eventually DEC's Multinational Character Set became the preferred eight-bit encoding method in Europe, and in 1988 it was adopted almost unchanged as ISO 8859-1. It has all the ASCII assignments in its lower half -- the values 0 through 127 -- and the characters that the European market wanted in its upper half -- the values 128 through 255 - the half purposely left unassigned by leaving the eighth bit free in ASCII.
    http://www.petrie.u-net.com/computin...s/asciirtf.htm
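    For a concrete taste of that split (a minimal Java sketch, not from the article above): the lower 128 byte values decode as plain ASCII under ISO 8859-1, while a value from the upper half comes out as one of the added European characters.

    import java.nio.charset.StandardCharsets;

    public class Latin1Demo {
        public static void main(String[] args) {
            byte[] data = { 0x48, 0x69, (byte) 0xE9 };   // 'H', 'i', and 0xE9 from the upper half
            // the lower 128 values decode exactly as ASCII; 0xE9 decodes as 'é'
            System.out.println(new String(data, StandardCharsets.ISO_8859_1));   // Hié
        }
    }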

    Hope that helped you out

  4. #4
    I might be guessing here (I am learning programming too)

    The reason why 8 bits=1 byte:

    A computer represents things in binary (1s and 0s): each bit is either on or off, like a switch. Bits are lumped together to represent more complex values, and the reason eight of them make a byte is that an eight-bit character set (extended ASCII) has 256 possible combinations.

    However, some of the other programming gurus like rioter or MsMittens might be of better assistance to you

    That is also why hard drives and other storage devices express everything in bytes, KB, MB, GB (and stay tuned for TB). Also, to make binary values easier to read, programmers devised a little system called hexadecimal (like 21h) to represent memory addresses and the like; there's a short sketch of that just below the links. I suggest you look at an EXCELLENT tutorial on data representation for x86 assembly here:

    http://webster.cs.ucr.edu/Page_asm/A...ml#HEADING1-30

    www.google.com is also chock full of information.
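    Here's what the hexadecimal shorthand buys you, as a minimal Java sketch (the value 21h is the one mentioned above; the class name is made up):

    public class HexDemo {
        public static void main(String[] args) {
            int value = 0x21;   // the 21h from above: one hex digit per four bits
            System.out.println("binary : " + Integer.toBinaryString(value));   // 100001
            System.out.println("decimal: " + value);                           // 33
            System.out.println("hex    : " + Integer.toHexString(value));      // 21
        }
    }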

    Hope this helps.

  5. #5
    AO Curmudgeon rcgreen's Avatar
    Join Date
    Nov 2001
    Posts
    2,716
    Digital Equipment Corp. used to make computers whose memory was arranged in 12-bit and 18-bit words rather than 8-bit bytes. If I'm not mistaken, one of those machines (the 18-bit PDP-7) was the first to run UNIX. The legacy of this lives on in the fact that many programs accept arguments stated as octal numbers.
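    Octal fits those word sizes because each octal digit covers exactly three bits, so a 12-bit or 18-bit word splits evenly into octal digits. A tiny Java sketch (the value and class name are just examples):

    public class OctalDemo {
        public static void main(String[] args) {
            int word = 07041;    // a leading zero makes this an octal literal (decimal 3617)
            // 12 bits split cleanly into four 3-bit groups: 111 000 100 001
            System.out.println("octal : " + Integer.toOctalString(word));    // 7041
            System.out.println("binary: " + Integer.toBinaryString(word));   // 111000100001
        }
    }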

    Eight bits makes more sense though, because it is a product of doubling: 2x1=2, 2x2=4, 2x4=8.
    I came into the world with nothing. I still have most of it.

  6. #6
    Senior Member
    Join Date
    Jul 2002
    Posts
    339
    Praveen: Go to Google (as you should have), type "byte definition", hit [I'm Feeling Lucky].

    Historically, 1 byte was not always 8 bits. On modern architectures a byte is nearly always 8 bits. By "modern" I mean since 1956...

    Peace always,
    <jdenny>
    Always listen to experts. They'll tell you what can't be done and why. Then go and do it. -- Robert Heinlein
    I'm basically a very lazy person who likes to get credit for things other people actually do. -- Linus Torvalds


  7. #7
    One thing to keep in mind is that in the early days of computers (as we know them anyway) the registers in a CPU were very small. In fact, the CPU that I first learned assembly programming on was an Intel 8088; its general-purpose registers were only 16 bits wide, each splittable into two 8-bit halves, and its external data bus moved just 8 bits at a time. Not to mention there were only 4 registers (AX, BX, CX, DX) that you could really use for processing at once, so in practice you worked on data a byte or a 16-bit word at a time and couldn't keep more than a handful of bytes in the CPU at once. (In reality you could do more, but you could get yourself into trouble really easily, so that's a topic best left for another day.)

    As CPUs have developed they have grown from the early 8- and 16-bit models that I first worked with to the 64-bit models that are buzzing around today, some with even wider vector registers. With the expansion of the width of registers in a CPU came an increase in the number of registers, and the wider registers have allowed the number of bits used to make up a piece of data to grow as well. Take Java's Unicode representation, for instance: every char is 16 bits in Java. The ASCII characters are accounted for in the lower 7 bits, but by increasing the number of bits available Java is able to use all 16 to represent virtually every character in every language.
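    You can see that directly from Java itself; a minimal sketch (the sample characters and class name are just examples):

    public class CharWidth {
        public static void main(String[] args) {
            // Character.SIZE reports the width of a char in bits
            System.out.println("bits per char: " + Character.SIZE);   // 16

            char ascii = 'A';        // fits in the low 7 bits (code point 65)
            char greek = '\u03A9';   // Greek capital omega, needs more than 8 bits
            // prints the code points: A = 65, then omega = 937
            System.out.printf("%c = %d, %c = %d%n", ascii, (int) ascii, greek, (int) greek);
        }
    }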
