Of course there is a precision loss. You have 1 bit for the sign, which is effectively the same as in an int, though it works differently (sign-magnitude rather than two's complement). There are 8 bits for the exponent, which is what gives you the huge range. That leaves 23 bits for the mantissa, which is what gives you precision: roughly 10^-7, or about 7 significant decimal digits. But unlike an integer, where precision is absolute, floating point precision is relative. So it's 10^-7 of whatever value you are working with.
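To make the "relative" part concrete, here's a minimal C sketch (assuming IEEE-754 binary32 floats, which is what essentially all current hardware gives you; the exact behavior near the cutoffs can shift slightly with compiler flags):

```c
#include <stdio.h>
#include <float.h>

int main(void) {
    /* Machine epsilon: the smallest relative step a float resolves. */
    printf("FLT_EPSILON = %e\n", FLT_EPSILON);   /* ~1.19e-07 */

    /* Near 1.0, an increment well below epsilon is lost entirely. */
    float a = 1.0f;
    printf("1.0f  + 1e-8f == 1.0f  ? %s\n", (a + 1e-8f == a) ? "yes" : "no");

    /* The same absolute increment survives at a smaller magnitude... */
    float b = 0.001f;
    printf("0.001f + 1e-8f == 0.001f ? %s\n", (b + 1e-8f == b) ? "yes" : "no");

    /* ...but at 2^24 even adding a whole 1.0f is lost. Precision
       is relative to the magnitude of the value, not absolute. */
    float c = 16777216.0f;   /* 2^24 */
    printf("2^24  + 1.0f  == 2^24  ? %s\n", (c + 1.0f == c) ? "yes" : "no");
    return 0;
}
```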
Double precision does, in fact, double the precision: it's good to about 10^-16. The problem is that not all compilers and not all hardware support double precision natively, so using double can come at a cost in performance. Furthermore, graphics hardware is built around single precision floats (double precision throughput on consumer GPUs is typically a small fraction of the single precision rate), so if you are making a game engine, you'd almost always try to make it work without using double. For scientific work, double is often necessary; it's the default precision in Matlab and Mathematica, and the conventional choice in Fortran codebases.
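A quick way to feel the difference is to let rounding error accumulate. A minimal sketch in C (exact digits will vary with compiler and flags, but the gap between float and double will be obvious):

```c
#include <stdio.h>
#include <float.h>

int main(void) {
    /* The relative precision jumps by ~9 orders of magnitude. */
    printf("FLT_EPSILON = %e\n", FLT_EPSILON);   /* ~1.19e-07 */
    printf("DBL_EPSILON = %e\n", DBL_EPSILON);   /* ~2.22e-16 */

    /* Naive accumulation: add 0.1 ten million times.
       The exact answer is 1000000. */
    float  fs = 0.0f;
    double ds = 0.0;
    for (int i = 0; i < 10000000; i++) {
        fs += 0.1f;
        ds += 0.1;
    }
    printf("float  sum: %f\n", fs);  /* drifts far from 1000000 */
    printf("double sum: %f\n", ds);  /* off only in the last digits */
    return 0;
}
```

The float sum goes wrong because once the running total grows large, 0.1 is small relative to it, and each addition rounds; the double has enough relative precision that the same drift stays invisible at this scale.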