How does the computer differentiate between signed and unsigned numbers? Supposedly there's something called two's complement notation, in which if the HO bit is one then the number is negative, and if it's zero then it's positive.

Therefore 8000h, i.e. 1000000000000000, should be negative, since the HO bit is one. But how does the computer know whether we're trying to represent a small negative number or just a really big positive number?
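
Here's a little C snippet that shows what I mean (this is just my sketch, assuming a typical two's complement machine, and I'm not sure the casts are the right way to demonstrate it):

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint16_t bits = 0x8000;  /* the bit pattern 1000000000000000 */

    /* Same 16 bits, two different interpretations: */
    printf("as unsigned: %u\n", (unsigned)bits);        /* prints 32768 */
    printf("as signed:   %d\n", (int)(int16_t)bits);    /* prints -32768 on a two's complement machine */

    return 0;
}
```

So the same bit pattern comes out as 32768 one way and -32768 the other way, which is exactly what confuses me.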

Supposedly there's a difference, and you can convert from positive to negative two's complement form by inverting all the bits and then adding one. Doesn't that just give you another positive number? Can someone please clear this up for me?
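
For example, here's my attempt at the invert-and-add-one step in C (again assuming a two's complement machine), which is where I get lost:

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint16_t x = 5;                     /* 0000000000000101 */
    uint16_t neg = (uint16_t)(~x + 1);  /* invert all bits, then add one: FFFBh */

    printf("bit pattern:  %04Xh\n", (unsigned)neg);      /* FFFBh */
    printf("as unsigned:  %u\n", (unsigned)neg);         /* 65531 -- the "big positive number" */
    printf("as signed:    %d\n", (int)(int16_t)neg);     /* -5 on a two's complement machine */

    return 0;
}
```

The result FFFBh looks like 65531 to me, not -5, so I don't see how inverting and adding one gives a negative number.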