YourNetworkIsHaunted @awful.systems · Posts 0 · Comments 689 · Joined 1 yr. ago
AI Overturns Centuries of Forensic Fingerprinting Practice?
Published in Science... Advances
Probably a fair bit to sneer at in the actual study that I'm missing, and the article I first found it in is peak AI Hype. (Big Forensics is trying to keep you from knowing the Truth, as found by an undergrad with a GPU.) But the part that I found most concerning is that even the full paper doesn't appear to break down their 77% accuracy index and provide the specific result ratios that go into it. In a field where each false positive represents a step on the road to innocent people being convicted of major crimes, I would really like to know that number specifically.
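To illustrate why the aggregate number hides exactly the thing that matters (confusion-matrix figures below are completely made up, since the paper doesn't report real ones): two matchers can post the identical 77% accuracy while one flags innocent non-matches four times as often as the other.

```python
# Hypothetical confusion matrices for two fingerprint matchers,
# each scoring 77% accuracy on 1000 comparisons.

def rates(tp, fp, tn, fn):
    total = tp + fp + tn + fn
    accuracy = (tp + tn) / total
    fpr = fp / (fp + tn)  # share of true non-matches wrongly flagged
    return accuracy, fpr

# Matcher A errs mostly by missing true matches (false negatives).
print(rates(tp=320, fp=50, tn=450, fn=180))   # (0.77, 0.10)

# Matcher B errs mostly by flagging non-matches (false positives).
print(rates(tp=470, fp=200, tn=300, fn=30))   # (0.77, 0.40)
```

Same headline number, wildly different odds of pointing at the wrong person.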
Looks like that is indeed the post. I have a number of complaints, but the most significant one is actually in the early part of the narrative, where they just assume "companies start to integrate AI" with little detail on how this is done, what kind of value it creates over their competitors, whether it's profitable for anyone, etc. I'm admittedly trusting David Gerard's and Ed Zitron's overall financial analysis here, but at present the trajectory seems to be moving in the opposite direction, with the AI industry as a whole looking likely to flame out as it burns through its ability to raise capital without ever actually finding a net return on that investment. At which point all the rest of it is sci-fi nonsense. Like, if you want to tell me a story about how we get from here to The Culture (or I Have No Mouth and I Must Scream), those are the details that need to be filled in. How do the intermediate steps actually work? Otherwise it's the same story we've been reading since the 70s.
I thought you had to wait at least a few generations to start inventing bullshit evo-psych-adjacent explanations for stuff.
Also this joke was funny when XKCD did it in the alt text 16 years ago. Jesus how has it been 16 years what the hell
Neither, actually. They were testing it by asking how many "T"s appeared in "Llama Four" and it kept saying "2" so they decided to roll with it.
I also legitimately can't tell the degree to which they don't understand they're LARPing a dystopia versus how much they completely understand that and that's why it's gonna be so awesome for them once they make fetch happen.
A massive domestic infrastructure project with little actual demand? In China of all places? I don't believe it
It's almost like the tech industry relies on a great deal of general stability, education, and other aspects of society that are broadly considered the responsibility of the state. I think there's a stock line here about libertarians and cats?
If he got into specifics people might think "Damn, I've never had that kind of experience with working-class New Yorkers. What gives?" and he might have to consider, let alone admit, that he was an asshole to someone.
Given that the apparent state of the art for autogenerated captions (and by extension the initial challenge of speech recognition) is firmly in the "good enough" range, I would not trust the chain of speech recognition -> translation -> text-to-speech. That's a lot of room for errors to chain, multiply, and obscure themselves through GIGO, even if the latter two steps did work as expected.
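Rough back-of-envelope, with per-stage accuracies I'm pulling out of thin air just to show the shape of the problem:

```python
# Assumed per-stage accuracies (illustrative, not measured):
stages = {
    "speech_recognition": 0.90,
    "translation": 0.95,
    "text_to_speech": 0.97,
}

# If stage errors were independent, end-to-end accuracy is the product.
end_to_end = 1.0
for name, accuracy in stages.items():
    end_to_end *= accuracy

print(f"end-to-end: {end_to_end:.1%}")  # ~82.9%, worse than any single stage
```

And independence is the optimistic case: a transcription error feeds the translator garbage, which tends to make things worse than the simple product suggests.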
So the primary doctrine is basically tech bros rewriting standard millenarian Christianity from mythic fantasy into science fiction. But it seems like the founder wants to be a Silicon Valley influencer more than he wants to be a proper cult leader, meaning that some of the people who take this shit seriously have accumulated absurd amounts of money and power, and occasionally the more deranged subgroups will spin off into a proper cult with everything that entails -- including, now, being involved in multiple homicides!
That's like a solid centiMoR, which is conveniently the share of HPMoR in which anything actually happens.
Jesus, fine, I'll watch it already, God.
Your mistake, distant future ghost, was in developing RNA repair nanites without creating universal healthcare.
There's a particular failure mode at play here that speaks to incompetent accounting on top of everything else. Like, without autocontouring, how many additional radiologists would need to magically be spawned into existence and get salaries, benefits, pensions, etc. in order to reduce overall wait times by that amount? Because in reality that's the money being left on the table; the fact that it's being made up in shitty service rather than actual money shouldn't meaningfully affect the calculus there.
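The crude version of that calculus, with every number below invented purely for illustration:

```python
# Entirely hypothetical figures -- none of these come from the article.
radiologists_equivalent = 3            # headcount needed for the same wait-time cut
annual_cost_per_radiologist = 450_000  # salary + benefits + pension

# The tool's value is the labor it substitutes for; that the saving shows
# up as shorter waits instead of cash doesn't change the accounting.
value_left_on_table = radiologists_equivalent * annual_cost_per_radiologist
print(f"${value_left_on_table:,}/year")  # $1,350,000/year
```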
By refusing to focus on a single field at a time, AI companies really did make it impossible to take advantage of Gell-Mann amnesia.
There's inarguably an organizational culture that is fundamentally disinterested in the things that the organization is supposed to actually do. Even if they aren't explicitly planning to end social security as a concept by wrecking the technical infrastructure it relies on, they're almost comedically apathetic about whether or not the project succeeds. At the top this makes sense because politicians can spin a bad project into everyone else's fault, but the fact that they're able to find programmers to work under those conditions makes me weep for the future of the industry. Even simple mercenaries should be able to smell that this project is going to fail and look awful on their resumes, but I guess these yahoos are expecting to pivot into politics or whatever administration position they can bargain for with whoever succeeds Trump.
That's fascinating, actually. Like, it seems like it shouldn't be possible to create this level of grammatically correct text without understanding the words you're using, and yet even immediately after defining "unsupervised" correctly the system (supposedly) still sets about applying a baffling number of alternative constraints that it seems to pull out of nowhere.
Or, alternatively: despite letting it "cook" for longer and pregenerate a significant volume of its own additional context before the final answer, the system is still, at the end of the day, an assembly of stochastic parrots that don't actually understand anything.
I don't think that the actual performance here is as important as the fact that it's clearly not meaningfully "reasoning" at all. This isn't a failure mode that happens if it's actually thinking through the problem in front of it and understanding the request. It's a failure mode that comes from pattern matching without actual reasoning.
write it out in ASCII
My dude, what do you think ASCII is? Assuming we're using standard internet interfaces here and the request is coming in as UTF-8-encoded English text, it is being written out in ASCII.
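The whole point, in three lines (plain English text round-trips identically because ASCII is a strict subset of UTF-8):

```python
text = "write it out in ASCII"

# Characters in the 0-127 range encode byte-for-byte identically in
# both encodings: every ASCII string is already valid UTF-8.
assert text.encode("utf-8") == text.encode("ascii")
print(text.encode("utf-8"))  # b'write it out in ASCII'
```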
Sneers aside, given that the supposed capability here is examining a text prompt and reasoning through the relevant information to provide a solution in the form of a text response, this kind of test is, if anything, rigged in favor of the AI compared to similar versions that add more steps to the task, like OCR or other forms of image parsing.
It also speaks to a difference in how AI pattern recognition works compared to the human version. For a sufficiently well-known pattern like the form of this river-crossing puzzle, it's the changes and exceptions that jump out to a human. This feels almost like giving someone a picture of the Mona Lisa with aviators on; the model recognizes that it's 99% of the Mona Lisa and goes from there, instead of recognizing that the changes from that base case are significant and intentional variation rather than either a totally new thing or a 'corrupted' version of the original.
The classic "I don't understand something therefore it must be incomprehensible" problem. Anyone who does understand it must therefore be either lying or insane. I'm not sure if we've moved forward or backwards by having the incomprehensible eldritch truth be progressive social ideology itself rather than the existence of black people and foreign cultures.