![User banner](https://discuss.tchncs.de/pictrs/image/6f304618-7368-4266-9c4d-3de80d43f558.webp)
yeah that's it, forgot the word for it https://en.wikipedia.org/wiki/Evolved_antenna
that ST5 antenna looks like a low-poly two-turn helical antenna, but what it looks like will be a function of design requirements
dunno about rockets, but the antenna thingy works only because you can simulate antenna performance very reliably, precisely, and quickly. This data was fed back, random small changes were made, and things that were an improvement were passed to the next iteration. Not sure what this approach is called, but none of it is LLM
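the loop described above (simulate, make a small random change, keep the change if it helps) is basically a simple hill climber / (1+1) evolutionary strategy. a minimal sketch, with a toy fitness function standing in for the antenna simulator (all names here are made up for illustration; real evolved-antenna work like NASA's ST5 ran an actual EM solver in the loop and evolved antenna geometry, not an abstract parameter vector):

```python
import random

def simulated_gain(design):
    # Stand-in for an antenna field solver: any cheap, deterministic
    # score of the design parameters works for this sketch.
    # Best possible value is 0, reached when every parameter is 0.5.
    return -sum((x - 0.5) ** 2 for x in design)

def mutate(design, step=0.05):
    # Small random perturbation of one randomly chosen parameter.
    child = list(design)
    i = random.randrange(len(child))
    child[i] += random.uniform(-step, step)
    return child

def evolve(n_params=8, iterations=2000, seed=0):
    # Start from a random design, then: simulate, mutate,
    # keep the child only if it is at least as good.
    random.seed(seed)
    best = [random.random() for _ in range(n_params)]
    best_score = simulated_gain(best)
    for _ in range(iterations):
        child = mutate(best)
        score = simulated_gain(child)
        if score >= best_score:  # keep only improvements
            best, best_score = child, score
    return best, best_score
```

since rejected mutations are simply discarded, the score can only go up over iterations; the whole approach lives or dies on how cheap and trustworthy the simulator call is, which is exactly the point the comment makes.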
i think that openai also wanted to solve their problems with fusion, but they went a step further: they made a startup for it. not a normal nuclear power plant hot rock machine, no, they want tech that is perpetually Just A Decade Away. it makes some perverse sense if your funding depends on misguided hype alone
well, it's not there yet https://www.youtube.com/watch?v=tou8ahLZvP4
fusion research is just the thinnest disguise for thermonuclear weapons research, especially the inertial confinement fusion variety
ah yes, the Simple English wiki filter but wrong
yeah, if you want such different pieces working together, you need training that makes exploiting all of it possible, goes without saying
i don't know enough about chinese rifles to speak authoritatively one way or the other, but there was a claim that it's a target from CQB training where they used rubber bullets that tumble no matter what you do
this is how people used guns before john browning was born
is engagement-driven content scoring harmful to everyone with a pulse? does pope shit in the woods? click here to find how
everything with a memory is capable of time travel
if i had to guess, the thing that prevents mobility now is constant surveillance, including by drones, plus lots of artillery and some attack drones too. the thing that will enable large-scale movements will be air dominance and even more EW
but was Kurzweil sued (and lost) for his bullshit?
Damn, if only.
Drones mostly target humans and crewed vehicles (and rapidly-and-suddenly-un-crewed vehicles), not other drones. Rare exceptions: recon drones crashing into other recon drones to break their propellers, and maybe 1 or 2 cases of FPV drones shooting down fixed-wing recon drones. Anti-drone warfare is mostly EW, then AAA and things like MANPADS or even bigger missiles, depending on how valuable that drone is as a target.
Besides, last time I checked it was not drones that took or retook Vovchansk (~80% Ukrainian-controlled last week); it was tanks, artillery, mechanized infantry, maybe a dash of CAS, and loads of AA and jammers, you know, just like in every war since the 80s or even a bit earlier. Loads of small cheap PGMs do work great in the anti-vehicle role, and drones are just that, so they make everybody hide a fair bit harder.
what makes me think that APSs are not a real factor either way is that everyone slaps ERA and anti-drone mesh on everything, which would interfere with radar. APSs historically had a huge blind spot on top, which is a bad thing in a war with drones. also, a major user of APSs, the IDF, slapped anti-drone grids on top of their tanks at the beginning of the current Gaza war, which kinda suggests that even they are not really sure it works well enough
wait i just noticed that mastodon doesn't show images embedded in comments (there are maps)
it funded some fundamental research, fine by me
yeah, and it's been like this since the Brits used freshly invented heavy machine guns in their colonial wars. machines killing machines is just what will cause army bean counters to burn the operators of these machines at the stake
“Humans are generally far removed from the scene of battle.”
if you have budget for that, against an enemy that doesn't
Ukrainians have this thing https://en.wikipedia.org/wiki/Zaslin_Active_Protection_System but i've never seen something like Drozd/Arena used, nor western APSs. plenty of ERA everywhere tho
A YouTuber falls victim to generative AI on Chinese social media, but the ramifications stretch beyond China.
![How AI turned a Ukrainian YouTuber into a Russian](https://lemmy.world/pictrs/image/7d16b907-9d89-4e96-bedf-c39d0ae72f1b.jpeg?format=webp&thumbnail=256)
cross-posted from: https://feddit.de/post/12110745
> “I don't want anyone to think that I ever said these horrible things in my life. Using a Ukrainian girl for a face promoting Russia. It's crazy.”
>
> Olga Loiek has seen her face appear in various videos on Chinese social media - a result of easy-to-use generative AI tools available online.
>
> “I could see my face and hear my voice. But it was all very creepy, because I saw myself saying things that I never said,” says the 21-year-old, a student at the University of Pennsylvania.
>
> The accounts featuring her likeness had dozens of different names like Sofia, Natasha, April, and Stacy. These “girls” were speaking in Mandarin - a language Olga had never learned. They were apparently from Russia, and talked about China-Russia friendship or advertised Russian products.
>
> “I saw like 90% of the videos were talking about China and Russia, China-Russia friendship, that we have to be strong allies, as well as advertisements for food.”
>
> One of the biggest accounts was “Natasha imported food” with a following of more than 300,000 users. “Natasha” would say things like “Russia is the best country. It's sad that other countries are turning away from Russia, and Russian women want to come to China”, before starting to promote products like Russian candies.
>
> This personally enraged Olga, whose family is still in Ukraine.
>
> But on a wider level, her case has drawn attention to the dangers of a technology that is developing so quickly that regulating it and protecting people has become a real challenge.
>
> From YouTube to Xiaohongshu
>
> Olga's Mandarin-speaking AI lookalikes began emerging in 2023 - soon after she started a YouTube channel which is not very regularly updated.
>
> About a month later, she started getting messages from people who claimed they saw her speak in Mandarin on Chinese social media platforms.
> Intrigued, she started looking for herself, and found AI likenesses of her on Xiaohongshu - a platform like Instagram - and Bilibili, which is a video site similar to YouTube.
>
> “There were a lot of them [accounts]. Some had things like Russian flags in the bio,” said Olga, who has found about 35 accounts using her likeness so far.
>
> After her fiancé tweeted about these accounts, HeyGen, a firm that she claims developed the tool used to create the AI likenesses, responded.
>
> They revealed more than 4,900 videos have been generated using her face. They said they had blocked her image from being used anymore.
>
> A company spokesperson told the BBC that their system was hacked to create what they called “unauthorised content” and added that they immediately updated their security and verification protocols to prevent further abuse of their platform.
>
> But Angela Zhang, of the University of Hong Kong, says what happened to Olga is “very common in China”.
>
> The country is “home to a vast underground economy specialising in counterfeiting, misappropriating personal data, and producing deepfakes”, she said.
>
> This is despite China being one of the first countries to attempt to regulate AI and what it can be used for. It has even modified its civil code to protect likeness rights from digital fabrication.
>
> Statistics disclosed by the public security department in 2023 show authorities arrested 515 individuals for “AI face swap” activities. Chinese courts have also handled cases in this area.
>
> But then how did so many videos of Olga make it online?
>
> One reason could be because they promoted the idea of friendship between China and Russia.
>
> Beijing and Moscow have grown significantly closer in recent years. Chinese leader Xi Jinping and Russian President Putin have said the friendship between the two countries has “no limits”. The two are due to meet in China this week.
> Chinese state media have been repeating Russian narratives justifying its invasion of Ukraine, and social media has been censoring discussion of the war.
>
> “It is unclear whether these accounts were coordinating under a collective purpose, but promoting a message that is in line with the government's propaganda definitely benefits them,” said Emmie Hine, a law and technology researcher from the University of Bologna and KU Leuven.
>
> “Even if these accounts aren't explicitly linked to the CCP [Chinese Communist Party], promoting an aligned message may make it less likely that their posts will get taken down.”
>
> But this means that ordinary people like Olga remain vulnerable and are at risk of falling foul of Chinese law, experts warn.
>
> Kayla Blomquist, a technology and geopolitics researcher at Oxford University, warns that “there is a risk of individuals being framed with artificially generated, politically sensitive content” who could be subject to “rapid punishments enacted without due process”.
>
> She adds that Beijing's focus in relation to AI and online privacy policy has been to build out consumer rights against predatory private actors, but stresses that “citizen rights in relation to the government remain extremely weak”.
>
> Ms Hine explains that the “fundamental goal of China's AI regulations is to balance maintaining social stability with promoting innovation and economic development”.
>
> “While the regulations on the books seem strict, there's evidence of selective enforcement, particularly of the generative AI licensing rule, that may be intended to create a more innovation-friendly environment, with the tacit understanding that the law provides a basis for cracking down if necessary,” she said.
> ‘Not the last victim’
>
> But the ramifications of Olga's case stretch far beyond China - it demonstrates the difficulty of trying to regulate an industry that seems to be evolving at break-neck speed, and where regulators are constantly playing catch-up. But that doesn't mean they're not trying.
>
> In March, the European Parliament approved the AI Act, the world's first comprehensive framework for constraining the risks of the technology. And last October, US President Joe Biden announced an executive order requiring AI developers to share data with the government.
>
> While regulations at the national and international levels are progressing slowly compared to the rapid race of AI growth, we need “a clearer understanding of and stronger consensus around the most dangerous threats and how to mitigate them”, says Ms Blomquist.
>
> “However, disagreements within and among countries are hindering tangible action. The US and China are the key players, but building consensus and coordinating necessary joint action will be challenging,” she adds.
>
> Meanwhile, on the individual level, there seems to be little people can do short of not posting anything online.
>
> “The only thing to do is to not give them any material to work with: to not upload photos, videos, or audio of ourselves to public social media,” Ms Hine says. “However, bad actors will always have motives to imitate others, and so even if governments crack down, I expect we'll see consistent growth amidst the regulatory whack-a-mole.”
>
> Olga is “100% sure” that she will not be the last victim of generative AI. But she is determined not to let it chase her off the internet.
>
> She has shared her experiences on her YouTube channel, and says some Chinese online users have been helping her by commenting under the videos using her likeness and pointing out they are fake.
> She adds that a lot of these videos have now been taken down.
>
> “I wanted to share my story, I wanted to make sure that people will understand that not everything that you're seeing online is real,” she says. “I love sharing my ideas with the world, and none of these fraudsters can stop me from doing that.”
How hard would it be to train an AI model to be secretly evil? As it turns out, according to Anthropic researchers, not very.
![Scientists Train AI to Be Evil, Find They Can't Reverse It](https://lemdro.id/pictrs/image/a072cf86-6b4d-4d22-b25b-9836718ca27c.jpeg?format=webp&thumbnail=256)
cross-posted from: https://lemmy.world/post/11178564
> Scientists Train AI to Be Evil, Find They Can't Reverse It: How hard would it be to train an AI model to be secretly evil? As it turns out, according to Anthropic researchers, not very.
Video
(they didn't learn their lesson)
![](https://discuss.tchncs.de/pictrs/image/80a63bb1-910c-44c3-85cc-86db4786e674.jpeg?thumbnail=1024&format=webp)
russians seem to have launched another offensive on Vuhledar; there won't be any other result, so you can pretend this meme is from the future
![](https://discuss.tchncs.de/pictrs/image/d48d88b0-c021-428d-8b7a-8a268d99babe.jpeg?thumbnail=1024&format=webp)
edit: orange bar was entirely too long and also i don't know how gradients work
Watch "'And it's empty - there's no TNT!' - Invaders received shells without explosives." on Streamable.
!["And it's empty - there's no TNT!" - Invaders received shells without explosives.](https://sh.itjust.works/pictrs/image/7c1f7a94-75c9-4758-9a19-0224cabbc1d6.webp?format=webp&thumbnail=256)
cross-posted from: https://lemmy.ca/post/6146353
> https://t.me/operativnoZSU/116474
wrong answers only
Elon Musk secretly ordered his engineers to turn off his company’s Starlink satellite communications network near the Crimean coast last year to disrupt a Ukrainian sneak attack on the Russian naval fleet, according to an excerpt adapted from Walter Isaacson’s new biography of the eccentric billionaire.
might be too credible
of course he was afraid of russian nukes. this only prompted Ukrainian engineers to bypass starlink entirely: current sea drones, like the one used in the second Kerch bridge strike, or those used against the SIG tanker and the Olenegorsky Gornyak landing ship, use domestic technology only
and these rules are in the sidebar. basically it's a 1:1 copy of what the rules on r/NCD used to be, adjusted for the smaller size and lack of flairs. in case you can't read them in the sidebar (for example because you're using an app that renders it broken), the rules are as follows:
1. Be nice
Do not make personal attacks against each other, call for violence against anyone, or intentionally antagonize people in the comment sections.
2. Explain incorrect defense articles & takes
If you want to post a non-credible take, it must be from a "credible" source (news article, politician, or military leader) and must have a comment laying out exactly why it's non-credible. Random twitter and YouTube comments belong in the Low Hanging Fruit thread.
3. Content must be relevant
Posts must be about military hardware or international security/defense. This is not the page to fawn over YouTube personalities, simp over political leaders, or discuss other areas of international policy.
4. No racism / hatespeech
No slurs. No advocating for the killing of people or insulting them based on physical, religious, or ideological traits.
5. No politics
We don't care if you're Republican, Democrat, Socialist, Stalinist, Baathist, or some other hot mess. Leave it at the door. This applies to comments as well.
6. No seriousposting
We don't want your uncut war footage, fundraisers, credible news articles, or other such things. The world is already serious enough as it is.
7. No classified material
Classified information is off limits regardless of how "open source" and "easy to find" it is.
8. Source artwork
If you use somebody's art in your post or as your post, you must provide a direct link to the art's source in the comment section, or a good reason why this was not possible (such as the artist deleting their account). The source should be a place that the artist themselves uploaded the art. A booru is not a source. A watermark is not a source.
9. No low-effort posts
No egregiously low-effort posts. These include social media screenshots with a title punchline or no punchline, recent (after the start of the Ukraine War) reposts, simple reaction & template memes, and images with the punchline in the title. Put these in the weekly Low Effort thread instead.
10. Don't get us banned.
No brigading or harassing other communities. Do not post memes with a "haha people that I hate died… haha" punchline or that violate the sh.itjust.works rules (below). This includes content illegal in Canada.