There are two differences. First, you let the compiler do the type conversion automatically, while in my code I convert explicitly to the type that is needed. Automatic conversion works in most cases, but it is better practice to tell the compiler exactly what you want.

The second difference is in the output. Your calculation displays one decimal place if the result is not an integer and none if it is. If you use the format string "Fn" (where "F" is the fixed-point specifier and n is the number of decimal places), the value is always shown with n decimals, even when they are zero. I don't like numbers jumping back and forth depending on how many decimals they happen to have, but that's just me. Of course, my sample is flawed: dividing by 10 can produce at most one decimal place and I display two, so the correct format string is "F1".
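As a minimal C# sketch of both points (the variable names and the division by 10 are only illustrative, not your original code):

```csharp
using System;

int sum = 25;

// Explicit cast: force floating-point division instead of relying on
// an implicit int-to-double promotion happening somewhere else.
double average = (double)sum / 10;

// "F1" always prints exactly one decimal place, even when it is zero.
Console.WriteLine(average.ToString("F1"));            // 2.5
Console.WriteLine(((double)20 / 10).ToString("F1"));  // 2.0, not "2"
```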