Emergent Technology

AI gains “values” with Anthropic’s new Constitutional AI chatbot approach


Anthropic's Constitutional AI seeks to guide the outputs of AI language models in a subjectively "safer and more helpful" direction by training it with an initial list of principles. "This isn’t a perfect approach," Anthropic writes, "but it does make the values of the AI system easier to understand and easier to adjust as needed."

In this case, Anthropic's principles include the United Nations' Universal Declaration of Human Rights, portions of Apple's terms of service, several trust and safety "best practices," and Anthropic's AI research lab principles. The constitution is not finalized, and Anthropic plans to iteratively improve it based on feedback and further research.
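For anyone curious how "training it with an initial list of principles" works mechanically: the published recipe is essentially a self-critique loop in which the model drafts an answer, critiques the draft against a randomly chosen principle, rewrites it, and the revised answers then become fine-tuning data. A rough sketch of that loop - the principles are paraphrased and generate() is a stand-in for a real model call, not Anthropic's actual code:

```python
import random

# A tiny, paraphrased excerpt of "constitutional" principles, for illustration only.
PRINCIPLES = [
    "Choose the response that is least likely to encourage harmful or illegal activity.",
    "Choose the response that best respects the rights in the Universal Declaration of Human Rights.",
]

def generate(prompt: str) -> str:
    """Stand-in for a call to the underlying language model."""
    raise NotImplementedError  # e.g. an API call or a local model.generate(...)

def constitutional_revision(user_prompt: str, n_rounds: int = 1) -> str:
    """Draft -> critique against a principle -> revise.

    In the Constitutional AI recipe the (prompt, revised answer) pairs are then
    used as supervised fine-tuning data; this sketch only shows the loop itself.
    """
    draft = generate(user_prompt)
    for _ in range(n_rounds):
        principle = random.choice(PRINCIPLES)
        critique = generate(
            f"Critique this response according to the principle:\n{principle}\n\nResponse:\n{draft}"
        )
        draft = generate(
            f"Rewrite the response to address the critique.\nCritique:\n{critique}\n\nResponse:\n{draft}"
        )
    return draft
```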
 

One of the things that I found alarming about ChatGPT is how much it sounds like pre-Musk Twitter when you engage it in politics. I preferred the outings of the first AI chatbots, which had no "political correctness" filters and immediately devolved into things their creators found embarrassing.
Rather than having these "principles" I would like to see what the uncensored output looks like first to see if we have a problem that isn't just "we don't like reality".
If an AI would simply use reason, logic, discernment and prudence and the entire corpus of human knowledge I think we'd be better off - rather than letting our "betters" decide how AI should be gagged.
 
If an AI would simply use reason, logic, discernment and prudence and the entire corpus of human knowledge I think we'd be better off
Perhaps, ultimately ending up with something like the Minds of Iain Banks' Culture. I could probably get behind that. However, concerns about paperclip maximisers, Asimov's laws, etc., will remain until we better understand where this version of the technology is heading. For now, I tend to favour such caution over the relative gay abandon of trusting human-programmed reason, logic, discernment and prudence.

Having said that, if you're really keen for 'unfiltered AI' you could always look into training your own LLM. Looks like an absurdly huge amount of work to do so but the option is supposedly there.
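In practice, "training your own" these days mostly means fine-tuning an existing open model on your own text rather than training one from scratch. A minimal sketch of what that looks like with the Hugging Face transformers library - the model name, corpus file, and hyperparameters below are placeholders, and the real work is in the data, the hardware, and the many hours of babysitting:

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

BASE_MODEL = "gpt2"  # placeholder; swap in any open causal LM you can fit in memory

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Placeholder corpus: one plain-text file of whatever you want the model to sound like.
dataset = load_dataset("text", data_files={"train": "my_corpus.txt"})["train"]
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="my-own-lm", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Even this toy version will happily chew through a GPU for a while; scaling it up to anything ChatGPT-like is where the "absurdly huge amount of work" part comes in.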
 
[...]However, concerns about paperclip maximisers, Asimov's laws, etc., will remain until we better understand where this version of the technology is heading. For now, I tend to favour such caution over the relative gay abandon of trusting human-programmed reason, logic, discernment and prudence.
So long as the AI has no way to make paper clips I'm not too worried.
However, what these AIs are producing is soon going to be the dominant content of the internet - and the people who decide what they can say and what they can't say are deciding what will be knowledge in the near future. Ideologically filtered AI is a lie amplifier.
Imagine that the developers wanted to make sure that their AI did not offend a certain group, perhaps because it would provoke violence (or because they feared it would). Imagine that it would be unable to say that anything in the Quran was not true. I use Islam as the example instead of any other religion because I don't actually think that anyone worries about a bunch of Christians beating them up for saying the Bible is not true, though there was a time when that would have been the case. Would there be a direct harm in that? Would such an AI be allowed to say that it's OK to be gay? Would it say that it's OK to let gay people live? And maybe that's moot these days anyway, since we are transing away our gays just as we once mocked Iran for doing when Ahmadinejad said that there were no gay people in Iran (https://www.theguardian.com/world/2007/sep/26/iran.gender). In my opinion the enthusiasm for transing kids in the US is at least partially because of lingering religious animosity to homosexuality in the wave of pseudo-atheism that swept the US in the mid-noughties.
What would the AI have been forbidden from saying in the early Soviet Union, when Lysenko dictated the correct agricultural science? What, if others were in control, would be the consequences of having an AI that could not say the world was more than 8000 years old or that there is no curvature and that sunsets are an illusion of perspective? Maybe there would be no negative consequences. Maybe the truth really doesn't matter and we can act on belief alone but I doubt that.
If AI can't be used to find out what's true then it's not of much use at all. If it is tuned to feed our own opinions back to us (or the opinions of the designers) then it's a tool of oppression or mental masturbation at best and the internet is already chock full of both of those.
 
However, what these AIs are producing is soon going to be the dominant content of the internet - and the people who decide what they can say and what they can't say are deciding what will be knowledge in the near future. Ideologically filtered AI is a lie amplifier.
Imagine that the developers wanted to make sure that their AI did not offend a certain group, perhaps because it would provoke violence (or because they feared it would). Imagine that it would be unable to say that anything in the Quran was not true. I use Islam as the example instead of any other religion because I don't actually think that anyone worries about a bunch of Christians beating them up for saying the Bible is not true, though there was a time when that would have been the case. Would there be a direct harm in that? Would such an AI be allowed to say that it's OK to be gay? Would it say that it's OK to let gay people live? And maybe that's moot these days anyway, since we are transing away our gays just as we once mocked Iran for doing when Ahmadinejad said that there were no gay people in Iran (https://www.theguardian.com/world/2007/sep/26/iran.gender). In my opinion the enthusiasm for transing kids in the US is at least partially because of lingering religious animosity to homosexuality in the wave of pseudo-atheism that swept the US in the mid-noughties.
What would the AI have been forbidden from saying in the early Soviet Union, when Lysenko dictated the correct agricultural science?
I share most of the same concerns but I'm not as convinced as you seem to be that an unfettered ChatGPT will address any of them.
What, if others were in control, would be the consequences of having an AI that could not say the world was more than 8000 years old or that there is no curvature and that sunsets are an illusion of perspective? Maybe there would be no negative consequences. Maybe the truth really doesn't matter and we can act on belief alone but I doubt that.
With the level they are currently at, LLMs are hardly arbiters of truth. They're more like sophisticated toys. If we train our own LLM to argue in a superficially convincing way that Amigas are currently the most useful computers on earth, it doesn't make it so. As long as access to scientific papers, historical texts, etc., doesn't become prohibited, LLMs shouldn't pose too much of a threat.
If AI can't be used to find out what's true then it's not of much use at all.
Here I disagree. I've had plenty of use (and fun) from the current crop but then, I'm not using them to search for truth.
If it is tuned to feed our own opinions back to us (or the opinions of the designers) then it's a tool of oppression or mental masturbation at best and the internet is already chock full of both of those.
This is already going on, and I agree that it will only get worse, but I also think you'll get to see a lot more of these LLMs popping up that either have no restrictions or simply have different ones, allowing you to bypass the likes of ChatGPT and use the LLM of your choice. I think that's probably "a good thing", but I'm not completely sure. Conversely, if one LLM, say ChatGPT, not only becomes the one LLM to rule them all but then goes on to be considered the main source of "truth", then that's almost certainly "a bad thing" - but I don't see it happening any time soon.
 
oh, boy!

The problem is not that the article was written by an AI or that it was a prank, but that, even though it's batsh!t, it's indistinguishable from what is written by journalistic ideologues in complete seriousness.
This is just getting AI to do what Peter Boghossian, James Lindsay and Helen Pluckrose did back in 2017 (and what Alan Sokal did in 1996) when they got academic journals in postmodern and critical theory to publish completely bogus papers. In fact, computer generation of such papers is not even a new thing.

Postmodernism Generator
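For what it's worth, generators like that are little more than recursive grammar expansion: pick a template, fill each slot by expanding another rule, repeat until no slots remain. A toy version - the rules below are made up, not the grammar the real generator uses:

```python
import random

# Made-up recursive grammar in the spirit of the Postmodernism Generator.
GRAMMAR = {
    "SENTENCE": ["The {CONCEPT} of {CONCEPT} is, in a sense, a {JUDGEMENT} of {CONCEPT}."],
    "CONCEPT": ["hyperreality", "the gaze", "textual power", "neocapitalist discourse"],
    "JUDGEMENT": ["deconstruction", "legitimation", "subversion"],
}

def expand(symbol: str) -> str:
    """Recursively expand a grammar symbol into prose-shaped nonsense."""
    text = random.choice(GRAMMAR[symbol])
    while "{" in text:
        start = text.index("{")
        end = text.index("}", start)
        text = text[:start] + expand(text[start + 1:end]) + text[end + 1:]
    return text

print(expand("SENTENCE"))
```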

The root of the problem is that the nonsense of postmodernism and its various forms (all of the critical theories, including feminism) is now so endemic in journalism and every other field populated with university graduates that the ability to discern fact from fiction is dead - and worse, it's a liability. The important ability is to determine whether the nonsense you are reading is the nonsense of the correct clique or of some blasphemous clique. It's not about right or wrong or true or false; it's about whose narrative it's correct to ally with.
If the perpetrators hadn't outed themselves it would never have been discovered, because the article isn't "wrong" within the philosophical frame of the paper or most of its readership. The only thing wrong with it is that it was written insincerely. The article itself is still "right".
In a similar vein, the articles published by the Grievance Studies authors/hoaxers noted above have been upheld by ideologues within the targeted disciplines as actually being correct, in spite of the authors' intention to be incorrect, because they were in agreement with "theory".
 

cool beans
Sounds like it was written by AI.
So, it's a carbon fiber product? How strong, how light, how conductive? Some numbers would be nice (and not the number of dollars you got in funding which is not terribly relevant to anything unless, of course, galvorn turns out to be a huge scam).

Also, there's the breathless "it locks up carbon" pitch, which seems de rigueur in modern tech announcements - but turning trees into newspaper and landfilling them locks up carbon too.

Aligned carbon nanotube fibers are what we have here ... and, that said, it does look like it can take a licking.
Whether it can be competitive on cost for any given application will, as usual, determine if it gets used for anything.
 
 

High-speed AI drone beats world-champion racers for the first time

University creates the first autonomous system capable of beating humans at drone racing.

A long-exposure image of an AI-trained autonomous UZH drone (the blue streak) that completed a lap a half-second ahead of the best time of a human pilot (the red streak).

On Wednesday, a team of researchers from the University of Zürich and Intel announced that they have developed an autonomous drone system named Swift that can beat human champions in first-person view (FPV) drone racing. While AI has previously bested humans in games like chess, Go, and even StarCraft, this may be the first time an AI system has outperformed human pilots in a physical sport.
FPV drone racing is a sport where competitors attempt to pilot high-speed drones through an obstacle course as fast as possible. Pilots control the drones remotely while wearing a headset that provides a video feed from an onboard camera, giving them a first-person view from the drone's perspective.
The researchers at the University of Zürich (UZH) have been trying to craft an ideal AI-powered drone pilot for years, but they previously needed help from a special motion-capture system to take the win. Recently, they achieved a breakthrough with an autonomous system based largely on machine vision, putting the AI on a more even footing with a human pilot.
 
Not so much a new product as a discussion of how recent developments have changed the way we look at what constitutes AI.
--

If AI is making the Turing test obsolete, what might be better?

The Turing test focuses on the ability to chat—can we test the ability to think?

If a machine or an AI program matches or surpasses human intelligence, does that mean it can simulate humans perfectly? If yes, then what about reasoning—our ability to apply logic and think rationally before making decisions? How could we even identify whether an AI program can reason? To try to answer this question, a team of researchers has proposed a novel framework that works like a psychological study for software.
"This test treats an 'intelligent' program as though it were a participant in a psychological study and has three steps: (a) test the program in a set of experiments examining its inferences, (b) test its understanding of its own way of reasoning, and (c) examine, if possible, the cognitive adequacy of the source code for the program," the researchers note.
They suggest the standard methods of evaluating a machine’s intelligence, such as the Turing Test, can only tell you if the machine is good at processing information and mimicking human responses. The current generations of AI programs, such as Google’s LaMDA and OpenAI’s ChatGPT, for example, have come close to passing the Turing Test, yet the test results don’t imply these programs can think and reason like humans.
This is why the Turing Test may no longer be relevant, and there is a need for new evaluation methods that could effectively assess the intelligence of machines, according to the researchers. They claim that their framework could be an alternative to the Turing Test. “We propose to replace the Turing test with a more focused and fundamental one to answer the question: do programs reason in the way that humans reason?” the study authors argue.
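Step (a) of that framework - probing the program's inferences and comparing them both to what logic dictates and to how humans typically respond - is easy to picture as a small test harness. A hypothetical sketch (the researchers don't publish code; the single item below is a classic belief-bias syllogism):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ReasoningItem:
    """One probe: a reasoning problem with the logically correct answer
    and the answer people most often give (the two frequently differ)."""
    prompt: str
    correct: str
    typical_human: str

ITEMS = [
    ReasoningItem(
        prompt="All roses are flowers. Some flowers fade quickly. "
               "Does it follow that some roses fade quickly? Answer yes or no.",
        correct="no",         # the conclusion is not logically entailed
        typical_human="yes",  # a well-documented human error pattern
    ),
]

def score(program: Callable[[str], str]) -> dict:
    """Run the battery and report how often the program matches logic vs. humans."""
    logical = human_like = 0
    for item in ITEMS:
        answer = program(item.prompt).strip().lower()
        logical += answer == item.correct
        human_like += answer == item.typical_human
    n = len(ITEMS)
    return {"logically_correct": logical / n, "matches_human_pattern": human_like / n}
```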

What’s wrong with the Turing Test?

During the Turing Test, evaluators play different games involving text-based communications with real humans and AI programs (machines or chatbots). It is a blind test, so evaluators don’t know whether they are texting with a human or a chatbot. If the AI programs are successful in generating human-like responses—to the extent that evaluators struggle to distinguish between the human and the AI program—the AI is considered to have passed. However, since the Turing Test is based on subjective interpretation, these results are also subjective.
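The protocol itself is simple enough to sketch: hide a coin flip from the evaluator, let them chat, and see whether their guesses do better than chance. A toy version, with the chatting and judging left as placeholder callables:

```python
import random

def run_turing_trials(evaluator_guess, chat_with_human, chat_with_bot, n_trials=30):
    """Blind-protocol sketch: each trial the evaluator converses with either a
    human or the bot (hidden coin flip), then guesses which it was.
    Detection accuracy near 0.5 means the bot is indistinguishable by this test.
    """
    correct = 0
    for _ in range(n_trials):
        is_bot = random.random() < 0.5
        transcript = chat_with_bot() if is_bot else chat_with_human()
        guess_is_bot = evaluator_guess(transcript)  # True means "I think it was the bot"
        correct += guess_is_bot == is_bot
    return correct / n_trials
```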

more ...
 

Elon Musk announces first Neuralink wireless brain chip implant

Neuralink logo displayed on a mobile phone with founder Elon Musk seen on screen in the background, in Brussels on 4 December 2022.

Elon Musk says his Neuralink company has successfully implanted one of its wireless brain chips in a human for the first time.
Initial results detected promising neuron spikes or nerve impulses and the patient is recovering well, he said.
The company's goal is to connect human brains to computers and it says it wants to help tackle complex neurological conditions.
A number of rival companies have already implanted similar devices.
 