Might be a valuable thing to do, if not for the fact that Amazon reviews have basically zero quality control, and if Amazon themselves didn't have corrupt incentives to host illegitimate reviews that mislead their own customers into buying low-quality products.
But if you listen to Amazon reviews at this point, I have an off-brand bridge to sell you.
When I'm interested in something, I go out of my way to read the comments for people's real experiences with it. Shit is expensive, and I want to know if it will be worth it.
I don't want to read a simulated review based on averaged comments, wtf.
🤖 I'm a bot that provides automatic summaries for articles:
The summaries have been in testing for at least a couple of months, and they’re now more widely available to a “subset” of users in the US on Amazon’s mobile app.
Amazon says they’re available “across a broad selection of products.” So far, we’ve seen them on TVs, headphones, tablets, and fitness trackers.
They also seem to focus primarily on the positives of the product, spending less time on the negatives and leaving them for the end.
That said, that could be because Amazon’s search already elevates highly rated products, so it’s hard to find summaries of anything that people have been particularly frustrated by.
The feature can be found at the top of the review section on mobile under the heading “Customers say.” At the end, the paragraph includes a note that it was AI-generated.
Summarizing customer reviews has turned out to be one of the more obvious and easy-to-implement uses of generative AI.
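To illustrate how little plumbing that actually takes, here's a minimal sketch of review summarization using Hugging Face's transformers summarization pipeline. Amazon hasn't said anything about which model powers its feature, so the model name and the sample reviews below are purely illustrative.

```python
from transformers import pipeline

# Load a general-purpose summarization model (model choice is illustrative).
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

reviews = [
    "The screen is gorgeous and setup took five minutes.",
    "Battery life is shorter than advertised, barely a full day.",
    "Great value for the price, but the speakers sound tinny.",
]

# Concatenate the reviews and ask the model for a short summary.
text = " ".join(reviews)
summary = summarizer(text, max_length=60, min_length=15, do_sample=False)
print(summary[0]["summary_text"])
```

A real system would obviously need to handle thousands of reviews per product (sampling or chunking them before summarization), but the core "reviews in, paragraph out" step really is this small.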
Is generative AI really what's used here? I would've assumed an LLM would vastly outperform something like a GAN on a summarization task. Or do we colloquially group those two under "generative AI"? But if we do, then isn't basically all AI generative, since making something is what we typically use code for? What would be an example of a non-generative AI then?
I think the reason people refer to LLMs as generative comes from the term GPT, which I believe is short for generative pre-trained transformer. At its core, it generates new outputs based on previous ones, and its purpose is to create new content. There are plenty of models that are not generative, like dedicated classifiers (think sentiment analyzers, models that try to identify what an object is, etc.).
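To make the distinction concrete, here's a minimal sketch using Hugging Face's transformers pipelines: a sentiment classifier that only maps text onto a fixed set of labels, next to a text generator that produces new tokens. The model choices are just illustrative defaults, not anything tied to Amazon's feature.

```python
from transformers import pipeline

# Non-generative model: a dedicated sentiment classifier.
# It maps input text to a fixed label set instead of producing new text.
classifier = pipeline("sentiment-analysis")
print(classifier("The battery died after two days."))
# e.g. [{'label': 'NEGATIVE', 'score': 0.99...}]

# Generative model: it continues the prompt with newly generated tokens.
generator = pipeline("text-generation", model="gpt2")
result = generator("Customers say the headphones are", max_new_tokens=20)
print(result[0]["generated_text"])
```

The classifier's output space is fixed in advance (here, positive/negative), while the generator's output space is open-ended text, which is the usual line people draw when they call something "generative."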