Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 7 July 2024
  • yeah, that's it, forgot the word for it: https://en.wikipedia.org/wiki/Evolved_antenna

    that ST5 antenna looks like a low-poly two-turn helical antenna, but what it looks like is a function of the design requirements

  • Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 7 July 2024
  • dunno about rockets, but the antenna thingy works only because you can simulate antenna performance very reliably, precisely and quickly. This data was fed back, small random changes were made, and things that were an improvement passed on to the next iteration. Not sure what this approach is called, but none of it is an LLM (rough sketch of the loop below)
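
    (that approach is usually called an evolutionary/genetic algorithm: simulate, score, mutate, keep the winners. a minimal sketch in Python of that kind of loop, with the antenna simulator stubbed out by a toy `simulate_gain` placeholder since the real thing needs an EM solver; the genome layout and all the numbers here are made up for illustration)

    ```python
    # rough sketch of the simulate-mutate-select loop described above (not any real tool)
    import random

    GENOME_LEN = 8        # e.g. one bend angle per wire segment (made-up encoding)
    POP_SIZE = 50
    GENERATIONS = 200
    MUTATION_STD = 5.0    # size of the "random small changes", in degrees

    def simulate_gain(genome):
        """Stand-in for the fast, reliable antenna simulation; a real setup
        would call an EM solver and score the result against requirements."""
        return -sum((angle - 42.0) ** 2 for angle in genome)  # toy objective

    def mutate(genome):
        """Make a slightly perturbed copy of a candidate design."""
        return [angle + random.gauss(0.0, MUTATION_STD) for angle in genome]

    # start from random designs, keep the best fifth each generation,
    # refill the population with mutated copies of the survivors
    population = [[random.uniform(0.0, 90.0) for _ in range(GENOME_LEN)]
                  for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        ranked = sorted(population, key=simulate_gain, reverse=True)
        survivors = ranked[:POP_SIZE // 5]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(POP_SIZE - len(survivors))]

    best = max(population, key=simulate_gain)
    print("best design:", [round(a, 1) for a in best])
    ```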

  • Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 7 July 2024
  • i think that openai also wanted to solve their problems with fusion, but they went a step further and made a startup for it. not a normal nuclear power plant hot-rock machine, no, they want the tech that is perpetually Just A Decade Away. it makes some perverse sense if your funding depends only on misguided hype

  • Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 7 July 2024
  • fusion research is just the thinnest disguise for thermonuclear weapons research, especially the inertial confinement variety

  • Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 7 July 2024
  • ah yes, the Simple English wiki filter but wrong

  • We regret to inform you that Ray Kurzweil is back on his bullshit
  • yeah, if you want such different pieces working together, you need training that makes exploiting all of it possible, goes without saying

  • Least inaccurate chinese rifle test
  • i don't know enough about chinese rifles to speak authoritatively one way or the other, but there was a claim that it's a target from CQB training where they used rubber bullets that tumble no matter what you do

  • Hammer and firing pin
  • this is how people used guns before john browning was born

  • Is social media fuelling political polarisation?
  • is engagement-driven content scoring harmful to everyone with a pulse? does the pope shit in the woods? click here to find out how

  • We regret to inform you that Ray Kurzweil is back on his bullshit
  • if i had to guess, the thing preventing mobility right now is constant surveillance, also by drones, plus lots of artillery and some attack drones too. the thing that will enable large-scale movements will be air dominance and even more EW

  • We regret to inform you that Ray Kurzweil is back on his bullshit
  • but was Kurzweil sued (and lost) for his bullshit?

  • We regret to inform you that Ray Kurzweil is back on his bullshit
  • Damn, if only.

    Drones mostly target humans and crewed vehicles, not other drones (and rapidly and suddenly turn crewed vehicles into un-crewed ones), with rare exceptions: recon drones crashing other recon drones by breaking their propellers, and maybe 1 or 2 cases of FPV drones shooting down fixed-wing recon drones. anti-drone warfare is mostly EW, then AAA and things like MANPADS or even bigger missiles, depending on how valuable the drone is as a target

    Besides, last time i checked it was not drones that took or retook Vovchansk (roughly 80% Ukrainian-controlled last week), it was tanks, arty, mechanized infantry, maybe a dash of CAS and loads of AA and jammers, you know, just like in every war since the 80s or even a bit earlier. Loads of small cheap PGMs do work great in the anti-vehicle role, and drones are just that, so it makes everybody hide a fair bit harder

  • what if, right, what *if* our super-duper-autocomplete was just *tricking* us so it could TAKE OVER ZEE VORLD AHAHAHAHAHAHA! that'd be wild, hey
  • what makes me think that APSs are not a real factor either way is that everyone slaps ERA and anti-drone mesh on everything, which would interfere with the radar. APSs historically had a huge blind spot on top, which is a bad thing in a war with drones. also, the IDF, a major user of APSs, slapped anti-drone grids on top of their tanks at the beginning of the current Gaza war, which probably means they're not really sure it works well enough

  • It can't be that the bullshit machine doesn't know 2023 from 2024, you must be organizing your data wrong (wsj)
  • wait i just noticed that mastodon doesn't show images embedded in comments (there are maps)

  • We regret to inform you that Ray Kurzweil is back on his bullshit
  • yeah, and it's been like this since the brits used freshly invented heavy machine guns in their colonial wars. machines killing machines is just the kind of thing that will make army bean counters burn the operators of those machines at the stake

  • We regret to inform you that Ray Kurzweil is back on his bullshit
  • “Humans are generally far removed from the scene of battle.”

    if you have the budget for that, and you're up against an enemy that doesn't

  • what if, right, what *if* our super-duper-autocomplete was just *tricking* us so it could TAKE OVER ZEE VORLD AHAHAHAHAHAHA! that'd be wild, hey
  • Ukrainians have this thing https://en.wikipedia.org/wiki/Zaslin_Active_Protection_System but i've never seen anything like Drozd/Arena used, nor western APSs. plenty of ERA everywhere tho

  • How Chinese AI turned a Ukrainian YouTuber into a Russian
    www.bbc.com How AI turned a Ukrainian YouTuber into a Russian

    A YouTuber falls victim to generative AI on Chinese social media, but the ramifications stretch beyond China.


    cross-posted from: https://feddit.de/post/12110745

    > "I don't want anyone to think that I ever said these horrible things in my life. Using a Ukrainian girl for a face promoting Russia. It's crazy.” > > Olga Loiek has seen her face appear in various videos on Chinese social media - a result of easy-to-use generative AI tools available online. > > “I could see my face and hear my voice. But it was all very creepy, because I saw myself saying things that I never said,” says the 21-year-old, a student at the University of Pennsylvania. > > The accounts featuring her likeness had dozens of different names like Sofia, Natasha, April, and Stacy. These “girls” were speaking in Mandarin - a language Olga had never learned. They were apparently from Russia, and talked about China-Russia friendship or advertised Russian products. > > “I saw like 90% of the videos were talking about China and Russia, China-Russia friendship, that we have to be strong allies, as well as advertisements for food.” > > One of the biggest accounts was “Natasha imported food” with a following of more than 300,000 users. “Natasha” would say things like “Russia is the best country. It’s sad that other countries are turning away from Russia, and Russian women want to come to China”, before starting to promote products like Russian candies. > > This personally enraged Olga, whose family is still in Ukraine. > > But on a wider level, her case has drawn attention to the dangers of a technology that is developing so quickly that regulating it and protecting people has become a real challenge. > > From YouTube to Xiaohongshu > > Olga’s Mandarin-speaking AI lookalikes began emerging in 2023 - soon after she started a YouTube channel which is not very regularly updated. > > About a month later, she started getting messages from people who claimed they saw her speak in Mandarin on Chinese social media platforms. > > Intrigued, she started looking for herself, and found AI likenesses of her on Xiaohongshu - a platform like Instagram - and Bilibili, which is a video site similar to YouTube. > > “There were a lot of them [accounts]. Some had things like Russian flags in the bio,” said Olga who has found about 35 accounts using her likeness so far. > > After her fiancé tweeted about these accounts, HeyGen, a firm that she claims developed the tool used to create the AI likenesses, responded. > > They revealed more than 4,900 videos have been generated using her face. They said they had blocked her image from being used anymore. > > A company spokesperson told the BBC that their system was hacked to create what they called “unauthorised content” and added that they immediately updated their security and verification protocols to prevent further abuse of their platform. > > But Angela Zhang, of the University of Hong Kong, says what happened to Olga is “very common in China”. > > The country is “home to a vast underground economy specialising in counterfeiting, misappropriating personal data, and producing deepfakes”, she said. > > This is despite China being one of the first countries to attempt to regulate AI and what it can be used for. It has even modified its civil code to protect likeness rights from digital fabrication. > > Statistics disclosed by the public security department in 2023 show authorities arrested 515 individuals for “AI face swap” activities. Chinese courts have also handled cases in this area. > > But then how did so many videos of Olga make it online? > > One reason could be because they promoted the idea of friendship between China and Russia. 
> > Beijing and Moscow have grown significantly closer in recent years. Chinese leader Xi Jinping and Russian President Putin have said the friendship between the two countries has “no limits”. The two are due to meet in China this week. > > Chinese state media have been repeating Russian narratives justifying its invasion of Ukraine and social media has been censoring discussion of the war. > > “It is unclear whether these accounts were coordinating under a collective purpose, but promoting a message that is in line with the government’s propaganda definitely benefits them,” said Emmie Hine, a law and technology researcher from the University of Bologna and KU Leuven. > > “Even if these accounts aren’t explicitly linked to the CCP [Chinese Communist Party], promoting an aligned message may make it less likely that their posts will get taken down.” > > But this means that ordinary people like Olga remain vulnerable and are at risk of falling foul of Chinese law, experts warn. > > Kayla Blomquist, a technology and geopolitics researcher at Oxford University, warns that “there is a risk of individuals being framed with artificially generated, politically sensitive content” who could be subject to “rapid punishments enacted without due process”. > > She adds that Beijing’s focus in relation to AI and online privacy policy has been to build out consumer rights against predatory private actors, but stresses that “citizen rights in relation to the government remain extremely weak”. > > Ms Hine explains that the “fundamental goal of China’s AI regulations is to balance maintaining social stability with promoting innovation and economic development”. > > “While the regulations on the books seem strict, there’s evidence of selective enforcement, particularly of the generative AI licensing rule, that may be intended to create a more innovation-friendly environment, with the tacit understanding that the law provides a basis for cracking down if necessary,” she said. > > 'Not the last victim’ > > But the ramifications of Olga’s case stretch far beyond China - it demonstrates the difficulty of trying to regulate an industry that seems to be evolving at break-neck speed, and where regulators are constantly playing catch-up. But that doesn’t mean they’re not trying. > > In March, the European Parliament approved the AI Act, the world's first comprehensive framework for constraining the risks of the technology. And last October, US President Joe Biden announced an executive order requiring AI developers to share data with the government. > > While regulations at the national and international levels are progressing slowly compared to the rapid race of AI growth, we need “a clearer understanding of and stronger consensus around the most dangerous threats and how to mitigate them”, says Ms Blomquist. > > “However, disagreements within and among countries are hindering tangible action. The US and China are the key players, but building consensus and coordinating necessary joint action will be challenging,” she adds. > > Meanwhile, on the individual level, there seems to be little people can do short of not posting anything online. > > Meanwhile, on the individual level, there seems to be little people can do short of not posting anything online. > > “The only thing to do is to not give them any material to work with: to not upload photos, videos, or audio of ourselves to public social media,” Ms Hine says. 
“However, bad actors will always have motives to imitate others, and so even if governments crack down, I expect we’ll see consistent growth amidst the regulatory whack-a-mole.” > > Olga is “100% sure” that she will not be the last victim of generative AI. But she is determined not to let it chase her off the internet. > > She has shared her experiences on her YouTube channel, and says some Chinese online users have been helping her by commenting under the videos using her likeness and pointing out they are fake. > > She adds that a lot of these videos have now been taken down. > > “I wanted to share my story, I wanted to make sure that people will understand that not everything that you're seeing online is real,” says she. “I love sharing my ideas with the world, and none of these fraudsters can stop me from doing that.”

    Cultists Draw a Boogeyman on Cardboard, Become Afraid Of It
    futurism.com Scientists Train AI to Be Evil, Find They Can't Reverse It

    How hard would it be to train an AI model to be secretly evil? As it turns out, according to Anthropic researchers, not very.


    cross-posted from: https://lemmy.world/post/11178564

    > Scientists Train AI to Be Evil, Find They Can't Reverse It::How hard would it be to train an AI model to be secretly evil? As it turns out, according to Anthropic researchers, not very.

    slight update

    russians seem to have launched another offensive on Vuhledar; the result won't be any different this time, so you can pretend this meme is from the future

    anyone else have this problem? no? okay fine

    edit: the orange bar was entirely too long and also i don't know how gradients work

    Defense Experts™, what could be credible field expedient fill for these shells?
    streamable.com "And it's empty - there's no TNT!" - Invaders received shells without explosives.


    cross-posted from: https://lemmy.ca/post/6146353

    > https://t.me/operativnoZSU/116474

    wrong answers only

    Ukrainian surface drones had Starlink comms disabled on Musk's direct orders
    web.archive.org CNN Exclusive: 'How am I in this war?': New Musk biography offers fresh details about the billionaire's Ukraine dilemma | CNN Politics

    Elon Musk secretly ordered his engineers to turn off his company’s Starlink satellite communications network near the Crimean coast last year to disrupt a Ukrainian sneak attack on the Russian naval fleet, according to an excerpt adapted from Walter Isaacson’s new biography of the eccentric billiona...

    might be too credible

    of course he was afraid of russian nukes. this only prompted Ukrainian engineers to bypass starlink entirely; current sea drones, like the one used in the second Kerch bridge strike, or those used against the SIG tanker and the Olenegorsky Gornyak landing ship, use domestic technology only

    we have rules now

    and these rules are in the sidebar. basically it's a 1:1 copy of what the rules on r/ncd used to be, adjusted for the smaller size and lack of flairs. in case you can't read them in the sidebar (because, for example, you're using an app where it's broken), the rules are as follows:

    1. Be nice

    Do not make personal attacks against each other, call for violence against anyone, or intentionally antagonize people in the comment sections.

    2. Explain incorrect defense articles & takes

    If you want to post a non-credible take, it must be from a "credible" source (news article, politician, or military leader) and must have a comment laying out exactly why it's non-credible. Random twitter and YouTube comments belong in the Low Hanging Fruit thread.

    3. Content must be relevant

    Posts must be about military hardware or international security/defense. This is not the page to fawn over YouTube personalities, simp over political leaders, or discuss other areas of international policy.

    4. No racism / hatespeech

    No slurs. No advocating for the killing of people or insulting them based on physical, religious, or ideological traits.

    5. No politics

    We don't care if you're Republican, Democrat, Socialist, Stalinist, Baathist, or some other hot mess. Leave it at the door. This applies to comments as well.

    6. No seriousposting

    We don't want your uncut war footage, fundraisers, credible news articles, or other such things. The world is already serious enough as it is.

    7. No classified material

    Classified information is off limits regardless of how "open source" and "easy to find" it is.

    8. Source artwork

    If you use somebody's art in your post or as your post, the OP must provide a direct link to the art's source in the comment section, or a good reason why this was not possible (such as the artist deleting their account). The source should be a place that the artist themselves uploaded the art. A booru is not a source. A watermark is not a source.

    9. No low-effort posts

    No egregiously low-effort posts. These include social media screenshots with a title punchline / no punchline, recent reposts (anything from after the start of the Ukraine War), simple reaction & template memes, and images with the punchline in the title. Put these in the weekly Low Effort thread instead.

    10. Don't get us banned.

    No brigading or harassing other communities. Do not post memes with a "haha people that I hate died… haha" punchline or content that violates the sh.itjust.works rules (below). This includes content illegal in Canada.

    skillissuer @discuss.tchncs.de

    i should be writing
