Emergent Technology

Deep Fake News! Strap yourselves in; the next few months are going to be interesting.

AI platform allegedly bans journalist over fake Trump arrest images

Trump family names are also seemingly blocked on AI-imager Midjourney.

AI-generated photo faking Donald Trump's possible arrest, created by Eliot Higgins using Midjourney v5.
Priest 17 pic seems to have caught a lot of folk out too. This will continue to get worse for at least another few months before it gets better.
 
I've already given my opinion here: AI will turn the web into even worse garbage than it already is, at great speed, and under the same pressures that made it the mess it is - the pursuit of personal and corporate gain.

These technologies can already create "content" at rates that outstrip any human, and the availability of compute for rent is expanding. There are already whole genres of YouTube that appear to be algorithmically generated, especially in kids' videos - the five finger family stuff and worse.

I've mentioned on this site that I came across plainly algorithmic ad and product generation when there was an internet fuss about shirts online that said "End Down's Syndrome".

The new AIs are just going to spread these sorts of things into every part of the internet at speed, and the people doing it will benefit - at least until the competition gets stiff. AIs will learn to make their content more attention-grabbing, but at the same time AI content will rapidly swamp human content, and AI-generated content will spiral down the drain the same way that Pandora, if anyone remembers that app, would eventually end up playing you the same three songs on rotation if you didn't actively police it constantly.

The only way to prevent the AI cannibalizing its own output and going psychotic - like a human would if locked in a sensory deprivation tank for years - is to give the internet senses: open it up to receiving input from reality, but also give it ways to test that reality. That is to say, the only way to save the internet from uselessness is to make it physically dangerous.

I don't see a realistic way to stop this. It's a classic arms race. Even if it's not allowed, whoever isn't using it will lose. If you are in business or you are seeking political power (the power to influence other people to do what you want) then not using AI to generate your content will leave you smothered by those who do - and there will always be people who do.
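
Just to make that feedback-loop worry concrete, here's a toy sketch - purely illustrative, a made-up statistics exercise rather than a claim about how any real model is trained: fit a distribution to some samples, keep refitting to samples drawn from your own previous fit, and watch the diversity drain away.
Code:
# Toy illustration of a generator retrained on its own output (nothing to do with any real model).
# Each "generation" fits a normal distribution to samples drawn from the previous generation's fit
# and then becomes the new source; with small samples, the estimated spread keeps shrinking.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.0, 1.0      # the "real world" the first generation learns from
n_samples = 10            # small sample size = strong feedback effect

for generation in range(1, 51):
    samples = rng.normal(mu, sigma, n_samples)   # "train" on the previous generation's output
    mu, sigma = samples.mean(), samples.std()    # refit, then become the next source
    if generation % 10 == 0:
        print(f"generation {generation:2d}: spread = {sigma:.3f}")

# The spread drifts toward zero - the same three songs on rotation, so to speak.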
 
The only way to prevent the AI cannibalizing its own output and going psychotic - like a human would if locked in a sensory deprivation tank for years - is to give the internet senses: open it up to receiving input from reality, but also give it ways to test that reality. That is to say, the only way to save the internet from uselessness is to make it physically dangerous.
That's more or less my thinking too. The "six-month pause", as proposed, is idiotically naive, to the point that I'm not sure they're even serious. Smells of some kind of PR exercise rather than a genuine attempt to ameliorate anything. That said, even if things unravel into awfulness, I think the internet will continue to be useful, just in more specifically targeted ways. And we'll look back on the gay abandon of the late '90s to early 2000s with nostalgic fondness.
I don't see a realistic way to stop this. It's a classic arms race. Even if it's not allowed, whoever isn't using it will lose. If you are in business or you are seeking political power (the power to influence other people to do what you want) then not using AI to generate your content will leave you smothered by those who do - and there will always be people who do.
Indeed, the genie is well and truly out of the bottle.
 
I'm a little late to this thread, so I haven't read all 5 pages yet. Forgive me if you've already covered this, but I have to chime in about AI projects like ChatGPT.

About a month ago now, I ran across ChatGPT, or it was brought to my attention - I forget how I discovered it, but... Holy Shit... As a guy whose job includes Microsoft 365 Power Platform development, I understand the issues and controversy behind it, but I honestly have fallen in love with this project. In my day-to-day, it has turned infuriating typos and other simple coding errors, which used to take hours to walk through, into a two-minute cut-and-paste fix.

I've also been able to ask and archive a LOT of "how do I" questions for my team, helping each of them in turn become more efficient at their jobs. I've just gotten into ChatGPT Plus (yes, I pay $10/month to have better access) and other programs like MidJourney (graphics creator) and I'm having a great time learning all this stuff...

I see Elon and everyone else asking for a 6-month pause to allow the government to catch up and regulate AI into oblivion, and I get it. I really do, but as long as the 200+ programs now in existence like ChatGPT, Bing Chat, and others make my day-to-day job much better and much more efficient, I have to say "bleep those guys...."
 

A passenger aircraft that flies around the world at Mach 9? Sure, why not

“How much does the world change if you can get anywhere in an hour?”

Concept art of Venus Aerospace's Stargazer aircraft.
Venus Aerospace

HOUSTON SPACEPORT—On a cloudy day in late March, Andrew Duggleby guided me a safe distance away from a rocket engine. We did not have to go far, maybe 50 meters, because the prototype engine designed and built by his small engineering team is not that large.
We waited for a few minutes before steam began to hiss out of the engine. And then, for a few seconds, the engine emitted a distinctive whistling sound. "There it is!" Duggleby exclaimed. By it, he meant the sound of a rotating detonation engine firing after its ignition. The sound indicated that a reaction front was successfully traveling around the engine 20,000 times a second.

Duggleby is chief technology officer of a company he co-founded with his wife, Sassie. Venus Aerospace has the goal of building a hypersonic aircraft that can carry perhaps a dozen passengers and travel at the astonishing speed of Mach 9, or more than 11,000 kilometers an hour.
“How much does the world change if you can get anywhere in an hour?” Sassie Duggleby asked me.
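
For scale, the quoted speed is easy to sanity-check with back-of-envelope arithmetic. The speed-of-sound figure below is the rough sea-level value; actual cruise would be at altitude, so treat all of it as ballpark only.
Code:
# Rough, back-of-envelope numbers; 343 m/s is the approximate sea-level speed of sound.
speed_of_sound_m_s = 343.0
mach = 9
speed_km_h = mach * speed_of_sound_m_s * 3.6   # ~11,100 km/h, matching "more than 11,000 km/h"

antipode_km = 20_000                           # roughly the farthest apart two points on Earth can be
worst_case_hours = antipode_km / speed_km_h    # ~1.8 hours of cruise
print(f"{speed_km_h:,.0f} km/h; antipode reached in about {worst_case_hours:.1f} hours of cruise")

So "anywhere in an hour" is a stretch for the literal far side of the planet, but a long-haul route like New York to Tokyo (roughly 11,000 km great-circle) would indeed come in around an hour of cruise.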
 
About a month ago now, I ran across ChatGPT, or it was brought to my attention - I forget how I discovered it, but... Holy Shit... As a guy whose job includes Microsoft 365 Power Platform development, I understand the issues and controversy behind it, but I honestly have fallen in love with this project. In my day-to-day, it has turned infuriating typos and other simple coding errors, which used to take hours to walk through, into a two-minute cut-and-paste fix.
Yes, it's amazingly good at certain things and I've been having fun with it and other "AI" platforms for a few months now. When I was still a developer, ChatGPT would have saved me tons of time generating boilerplate code. I'm working as a school teacher for now, and ChatGPT brings its own issues into that sector which will have to be addressed, but I've already used it for a teaching point because it sometimes spits out code with errors. As someone with zero graphic design chops, I've also found the image-generating stuff useful for ideas for "record sleeves" to go along with music I publish on Spotify and elsewhere.

Despite all of that and how much fun I'm having with it, I expect the disruption to be rather significant over the next wee while and don't think we've seen anything yet. Chaos ahoy. :jerry:

Here's just one example from today:

Stable Diffusion copyright lawsuits could be a legal earthquake for AI

Experts say generative AI is in uncharted legal waters.

 
The example given of Mickey Mouse in front of a McDonald's sign - both of those are trademarks and fall under different law.
As to the copyright claims on images that were used as input to image AIs, my personal opinion is that if the output had been created by humans using the copyrighted images for inspiration there would be no case, as the creations would be clearly transformative rather than just copies.

Also, technically, the images haven't been copied into the AI image generators - I'm pretty sure it would be impossible to create a prompt that reproduces the original artworks, so in no meaningful way have the copyrighted images been copied. What the complainants are arguably trying to assert is "looking rights" - an expansion of copyright ad absurdum: the right to make money on anything anyone (or anything) produces after seeing a copyrighted work.
 
I'm pretty sure it would be impossible to create a prompt to reproduce the original artworks...
At the moment, maybe. However, I see no reason to suppose this will remain forever out of reach. Even in the linked article we have this:
Stable Diffusion doesn’t generate direct copies like this very often. Researchers tried to reproduce 350,000 images from Stable Diffusion’s training set but only succeeded with 109 of them.
It's already messy and it's going to get a whole lot messier.
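
For what it's worth, checking whether a generated image is a near-copy of something in the training set doesn't need anything exotic. The researchers used a more careful similarity measure than this, but a perceptual-hash comparison along these lines is the basic idea (file names are just placeholders):
Code:
# Minimal near-duplicate check with perceptual hashing (pip install ImageHash pillow).
# Only a sketch of the idea; the cited study used a more rigorous similarity measure,
# and the file paths below are placeholders.
from PIL import Image
import imagehash

generated = imagehash.phash(Image.open("generated.png"))
training = imagehash.phash(Image.open("training_example.png"))

distance = generated - training   # Hamming distance between the 64-bit perceptual hashes
if distance <= 8:                 # small distance => visually near-identical
    print(f"likely near-duplicate (distance {distance})")
else:
    print(f"probably distinct (distance {distance})")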
 
At the moment, maybe. However, I see no reason to suppose this will remain forever out of reach. Even in the linked article we have this:

It's already messy and it's going to get a whole lot messier.
Those "copies" aren't actual copies though - and they leave the watermarks in place on watermarked images while degrading the quality. To get a closely similar image you have to come up with a pretty specific prompt. If someone was wanting to rip of some Getty Images it would be better just to grab a public image from the site (the same public images that the image AIs use) and use regular paint tools to remove the watermark.
Even if it is technically possible to elicit images highly similar to source images, it's not a practical or effective tool for doing so, and the only images it was trained on are ones available to the public anyway. If some nefarious villain were able to create a clone of a copyrighted image using some image AI for profit, it's probably easy enough to go after the evil genius rather than the AI - various graphical tools can be used to infringe on images, but those tools aren't liable, and AI image generators are very bad tools for the purpose of infringing.
 

NYPD robocops: Hulking, 400-lb robots will start patrolling New York City

Mayor says new surveillance bots are "only the beginning" of police force revamp.

NYC Mayor Eric Adams holds a press conference with members of the NYPD and Boston Dynamics' Spot.
Michael Appleton/Office of the Mayor of New York City

The New York Police Department is bringing back the idea of policing the city with robots. The department experimented with Boston Dynamics' Spot in 2021 and shut the project down after a public outcry from civil liberties groups. The idea has now been revived by NYC's mayor, Eric Adams, who took office in 2022 and described himself multiple times during the announcement as a "computer geek." Adams is a former NYPD captain and ran on a platform of reducing crime.

Most police departments already have an arsenal of robots, but they are usually for bomb disposal, not the day-to-day patrolling work that New York City envisions. Bomb disposal robots are usually just fancy remote-controlled cars—totally 'dumb' remote-control devices that have no automation and require one or several people to operate. NYC wants semi-autonomous robots patrolling the streets. Adams says, "If we were not willing to move forward and use technology on how to properly keep cities safe, then you will not keep up with those doing harmful things."
 

Elon Musk purchases thousands of GPUs for generative AI project, despite signing cautionary AI "pause" letter.

Despite recently calling for a six-month pause in the development of powerful AI models, Elon Musk has purchased roughly 10,000 GPUs for a generative AI project within Twitter, reports Business Insider, citing people familiar with the company. The exact nature of the project, however, is still a mystery.

GPUs, or graphics processing units, are purpose-built chips originally designed for computer graphics, but their massively parallel designs make them ideal for doing generative AI processing as well. Training (creating) a new AI model usually requires a large amount of computing power, including many GPUs, which means that Musk's acquisition could represent a significant commitment toward developing a deep-learning AI model within Twitter.

In late February, The Information broke news that Musk had approached AI researchers to form a new AI lab to compete with OpenAI's ChatGPT, including former DeepMind researchers Igor Babuschkin and Manuel Kroiss. Earlier, Musk had publicly complained about bias in OpenAI's products, saying they were too "woke." Musk co-founded OpenAI in 2015 but left the company after an internal disagreement in 2018.

As for the Twitter-based AI project, Business Insider reports that it's a large language model (LLM), the type of generative AI tech that powers ChatGPT. The firm could potentially utilize its massive library of user tweets to help train the model for natural language output.
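
To put "a large amount of computing power" into rough numbers: a common back-of-envelope rule is about six floating-point operations per parameter per training token. Every figure below is an assumption for the sake of illustration - nothing here is reported about Musk's actual project.
Code:
# Back-of-envelope training-compute estimate using the common ~6 * parameters * tokens rule.
# All numbers are illustrative assumptions, not figures about any real project.
params = 70e9       # hypothetical 70-billion-parameter model
tokens = 1.4e12     # hypothetical 1.4 trillion training tokens
total_flops = 6 * params * tokens          # ~5.9e23 floating-point operations

gpus = 10_000
peak_flops_per_gpu = 312e12                # A100-class bfloat16 peak, FLOP/s
utilisation = 0.4                          # real training jobs rarely run at peak

seconds = total_flops / (gpus * peak_flops_per_gpu * utilisation)
print(f"~{total_flops:.1e} FLOPs, roughly {seconds / 86_400:.1f} days on this cluster")

On assumptions like these, 10,000 GPUs is enough to train a serious model in days to weeks, which is presumably why the purchase is being read as a real commitment rather than an experiment.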
 
... as long as the 200+ programs now in existence like ChatGPT, Bing Chat, and others make my day-to-day job much better and much more efficient, I have to say "bleep those guys...."

Below is an open source LLM that lets you roll your own, if you have the computing grunt (I installed an open source image generator on my puny Mac Mini and it slowed it to a crawl, so I'm giving this one a miss). You can also build your own web-based AI chatbots and image generators via AWS.

“A really big deal”—Dolly is a free, open source, ChatGPT-style AI model

Dolly 2.0 could spark a new wave of fully open source LLMs similar to ChatGPT.

On Wednesday, Databricks released Dolly 2.0, reportedly the first open source, instruction-following large language model (LLM) for commercial use that's been fine-tuned on a human-generated data set. It could serve as a compelling starting point for homebrew ChatGPT competitors.
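
If anyone wants to poke at Dolly 2.0 themselves, the released checkpoint runs through the standard Hugging Face tooling. A minimal sketch, assuming the databricks/dolly-v2-12b weights on Hugging Face, the transformers library, and a GPU box with enough memory for a 12-billion-parameter model:
Code:
# Minimal sketch of running Dolly 2.0 via Hugging Face transformers; assumes the
# databricks/dolly-v2-12b checkpoint and enough GPU memory for a 12B-parameter model.
import torch
from transformers import pipeline

generate_text = pipeline(
    model="databricks/dolly-v2-12b",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,   # Dolly ships its own instruction-following pipeline code
    device_map="auto",
)

result = generate_text("Explain, in two sentences, what an instruction-tuned language model is.")
print(result[0]["generated_text"])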
 
Some of today's AI nonsense. Music and vocal impersonation this time:

Drake and The Weeknd AI song pulled from Spotify and Apple

I don't have that much sympathy for the music industry. They don't actually care about artists; they care that they have signed the artists for their unique voices (which they then autotune the hell out of) and hold the rights to sell those artists' works, but the product they sell is already a "fake", a technological artifact. That's not to say there isn't beauty in many of those recordings, but there's no reason people won't find value and beauty in AI-created music too - it will just change who ends up with the money.
This ultimately needn't be bad for music companies, as they will just develop their own in-house AIs trained on their own catalogues, but musicians will lose distribution and will have to (as they did before recorded music) make a living from live performance. I guess the big problem is that music as a commodity will be further devalued and the people who make it will earn less, but that is the same trajectory as with all of our technology. Once upon a time hundreds of men were employed to dig ditches - until the steam shovel put them out of a job. Many thousands of people in the past were employed to sum columns of numbers for banks, businesses, and science - jobs that were heavily impacted by mechanical calculators and then by computers.
I suspect that more of the money will move to the producers who adopt this technology and have the talent to use it to create hits, in much the same way that producers currently make use of all the music technology at their disposal.
 

AI Music Generator Continues to Pump Out Over 20,000 Songs Per Day

I can't access that link here but I did read about this earlier.
Non-human created music removed because it was being listened to by non-human bots. What a time to be alive.
 