Confused about multiplying floating-point & integer values
I'm currently doing Dr. Charles Severance's lessons on freeCodeCamp to try to learn Python 3. I'm on lesson exercise 02_03 and confused about multiplying floating-point and integer values.
The goal is to write a Python program that multiplies hours worked by pay rate to come up with a pay amount.
This is the code I wrote:
h = input("Enter hours: ")
r = input("Enter pay rate: ")
p = float(h) * r
I got a traceback error, and the video said the correct way to fix it was to change line 3 from p = float(h) * r to p = float(h) * float(r).
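For anyone following along, here's the fix from the video written as a small function (gross_pay is just a name I made up for illustration, not from the course) so both conversions are explicit:

```python
def gross_pay(hours: str, rate: str) -> float:
    """Both arguments arrive as strings, which is what input() returns.
    Each one has to be converted to a float before multiplying."""
    return float(hours) * float(rate)

# e.g. gross_pay("35", "2.75") multiplies 35.0 by 2.75
print(gross_pay("35", "2.75"))
```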
However, what I'm confused about is why I would need to convert r to a floating-point value when it's already a floating-point value (since it'd be a currency value like 5.00 or something once I typed it in per the input() command)*.
What am I missing here?
*I can't remember: are the individual commands in a Python line called "commands"?
Edit: Wrote plus signs in my post here instead of asterisks. Fixed.
EDIT: Thanks to @Labna@lemmy.world and @woop_woop@lemmy.world. I thought that the value returned by input() was a string only until the end-user typed something in upon being prompted, and then became a floating-point value or integer value (or stayed a string) according to what was typed.
This is incorrect: the value is a string regardless of what is typed unless it is then converted to another type.
Honestly, I had a bunch of little confusions. I thought input() returned a string only until the user typed in a value when prompted, and that it then became either an integer value or a floating-point value depending on what was typed in.
Thanks to Labna@lemmy.world and your other response, I understand that it is always a string, regardless, until you convert it after the fact.
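A quick way to see this for yourself (simulating the typed value here instead of calling input() interactively, since input() would just block waiting for a keyboard):

```python
# input() always hands back a str, no matter what the user types.
typed = "3.00"           # what input() would return if the user typed 3.00
print(type(typed))       # still a string, even though it *looks* like a number

as_float = float(typed)  # the conversion only happens when you ask for it
print(type(as_float))    # now it's a float
```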
Also, I meant to type an asterisk instead of a plus sign when typing over my code snippet into my post. Fixed now.
Also, to answer your last question: if I do h+r or h*r, I get "5010" for the former (which makes sense) and the standard "can't multiply sequence by non-int of type 'str'" for the latter, which also makes sense to me now that I understand the above point.
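That pair of results can be reproduced directly (the "50" and "10" values here are assumed stand-ins for whatever input() returned):

```python
h = "50"  # pretend these came from input()
r = "10"

print(h + r)  # "5010": + on two strings is concatenation, not addition

try:
    h * r     # * between two strings is not defined
except TypeError as e:
    print(e)  # can't multiply sequence by non-int of type 'str'
```

The * error message mentions "sequence" because str * int repetition (e.g. "ab" * 3) is legal, so Python complains about the string on the right, not the one on the left.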
I think I see where you got confused. The value returned by input() is always a string, and you have to convert it into a number before using it in a calculation. There is no automatic conversion: Python won't turn it into a number for you, whether you then use * or **.
But I thought the "value" doesn't exist until the end-user types it in, due to the use of input(). So it starts off as a string, then becomes whatever is typed in, which then gets filtered through the next line. So if I type 3, it'd be treated as an integer, and likewise as a float if I type 3.00.
It is the responsibility of your program to validate the result and do whatever you want with it, and part of that can include casting it to a different type.
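A minimal sketch of that validate-then-convert idea (parse_rate is an illustrative name, not from the course; returning None on bad input is just one possible policy):

```python
from typing import Optional

def parse_rate(raw: str) -> Optional[float]:
    """Try to convert user input to a float; return None if it isn't a number."""
    try:
        return float(raw)
    except ValueError:  # float() raises ValueError on non-numeric text
        return None

print(parse_rate("5.00"))  # a float
print(parse_rate("abc"))   # None, so the caller can re-prompt or bail out
```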