this is not ragebait rule
i hope this doesn't cause too much hate. i just wanna know what u people and creatures think <3
I'm honestly skeptical about the medical stuff. Machine learning can't reliably do even the things it should be good at, like identifying mushrooms/mycology in general.
that is interesting. i know there are plenty of plant recognition ones, and recently there have been some classifiers trained specifically on human skin to tell whether a lesion is a tumor or not. that one is better than a good human doctor in their field, so i wonder what happened with that mushroom classifier. maybe it is too small to generalize, or was trained in a specific environment.
I haven't looked closely enough to know, but I recall medical image analytics being "better than human" well before the current AI/LLM rage. Those systems use machine learning too, but in a more deterministic, more conventional algorithmic sense. I think they are also less worried about false positives, because the algorithm is always assumed to be checked by a human physician; my impression is that the real sense in which medical image analysis is 'better' is that it identifies smaller or more obscure defects that a human quickly scanning the image might overlook.
If you're using a public mushroom identification AI as the only source for a life-and-death choice, then false positives are a much bigger problem.
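A back-of-the-envelope Bayes calculation (all numbers invented for illustration) shows why false positives dominate when a classifier is the only check: even a 95%-accurate test for a rare condition is wrong most of the times it says "yes".

```python
def positive_predictive_value(sensitivity: float, specificity: float, base_rate: float) -> float:
    """P(actually positive | test says positive), via Bayes' rule."""
    true_pos = sensitivity * base_rate
    false_pos = (1 - specificity) * (1 - base_rate)
    return true_pos / (true_pos + false_pos)

# Hypothetical: 95% sensitivity and specificity, 1% base rate.
ppv = positive_predictive_value(0.95, 0.95, 0.01)
print(f"{ppv:.1%}")  # roughly 16%: most positive calls are false alarms
```

This is exactly why the "always checked by a human physician" assumption matters: with rare conditions, the human reviewer's job is mostly filtering out false alarms.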
yes, that is what i have heard too. there was a news piece some days ago saying this "cancer scanner" will be available to all doctors in two years. so that's great! but yes, we very much still need a human to watch over it, so its out-of-distribution predictions stay in check.
Do not trust AI to tell you if you can eat a mushroom. Ever. The same kind of complexity goes into medicine. Sure, the machine learning process can flag something as cancerous (for example), but it will always and forever need human review unless we somehow completely change the way machine learning works and speed it up by an order of magnitude.
yeah, we still very much need real humans to go "yes, this is indeed cancer", but this ai cancer detection feels like a reasonable "first pass" to quickly get a somewhat good estimate, rather than no estimate at all where doctors are scarce.
Sorry in advance for being captain obvious, but I feel like I can't get over this. Your comment is valuable and I completely agree with your take here, but the elephant in the room is: how do the people with power actually choose to use these tools? It's not like I can effect change on healthcare AI use on my own.
So yes, it really can be a first-pass, sanity-check type of tool. It could help a good doctor if it were employed in a sane and useful way. And if the people with power over the system choose to use it that way, I believe it would be a genuine benefit to a majority of humanity, worth the cost of its creation and maintenance.
Or, it could be used to second guess the doctors, cram more cases through without paying them fairly, or "justify" not having enough qualified experts to match our collective need.
Just framing how it is used a little differently suddenly takes us from a genuine benefit to humanity to profit-seeking for the 1% and a lower quality of life for the rest of us. That is by far my largest concern with this. I suppose that's my largest concern with a lot of things right now.
yes, currently ai is largely being marketed to evil businesses wanting to automate some humans away. and in healthcare, especially in the US i fear, this will likely catch on.
it's simply more cost-effective, while also being generally more reliable (better than humans, even) at very specific tasks. buuuuut not all tasks. so we still have to keep a doctor around, since they are needed for physical exams and such.
this amount of exclusively profit-driven stuff is really sad. u would expect "health" companies to actually want to make u well... but no, they jus wan ur moni. big sad.
i am very sorry for everyone who has to live in this reality.
Having worked with ML in manufacturing: if your task is precise enough and your input normalized enough, it can detect very impressive things. Identifying mushrooms as a whole is already too grand a task, especially as it has to deal with different camera angles, lighting, and so on. But ask it to differentiate between a few species, always with pictures using similar angles, lighting, and background, and the results will most likely be stellar.
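A minimal sketch of the "normalized input" point (function name and parameters are my own invention, not from any real pipeline): before a narrow classifier sees an image, force every photo into the same shape and brightness range, so the model only has to learn species differences rather than camera differences.

```python
import numpy as np

def normalize_image(img: np.ndarray, size: int = 64) -> np.ndarray:
    """Center-crop to a square, crudely downsample, and standardize brightness."""
    h, w = img.shape[:2]
    side = min(h, w)
    top, left = (h - side) // 2, (w - side) // 2
    img = img[top:top + side, left:left + side]        # center crop to square
    step = side // size
    img = img[::step, ::step][:size, :size]            # crude stride-based downsample
    img = img.astype(np.float64)
    return (img - img.mean()) / (img.std() + 1e-8)     # zero mean, unit variance

# A fake 480x640 RGB "photo" stands in for a real camera frame.
photo = np.random.default_rng(0).uniform(0, 255, (480, 640, 3))
x = normalize_image(photo)
print(x.shape)  # (64, 64, 3): every input now looks the same to the model
```

Real pipelines would use proper resampling (e.g. an image library) instead of strided slicing, but the principle is the same: the more variation you remove up front, the easier the narrow classification task becomes.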
Like I said, I'm just skeptical. I know it can do impressive things, but unless we get a giant leap forward, it will always need extensive human review when it comes to medicine (like my mycology example). In my opinion, it is a tool for quick and dirty analysis in the medical field which may speed things up for human review.
From what little I know of it, it's sorta twofold what it does:
In reality, I am sure that practices and hospital systems are just going to use this as an excuse to say "You don't need to spend as much time on documentation and chart review now so you can see more patients, right?" It's the cotton gin issue.