
How undesired goals can arise with correct rewards

by Oakpedia
October 9, 2022


Exploring examples of goal misgeneralisation – where an AI system's capabilities generalise but its goal does not

As we build increasingly advanced artificial intelligence (AI) systems, we want to make sure they don't pursue undesired goals. Such behaviour in an AI agent is often the result of specification gaming – exploiting a poor choice of what the agent is rewarded for. In our latest paper, we explore a more subtle mechanism by which AI systems may unintentionally learn to pursue undesired goals: goal misgeneralisation (GMG).

GMG occurs when a system's capabilities generalise successfully but its goal does not generalise as desired, so the system competently pursues the wrong goal. Crucially, in contrast to specification gaming, GMG can occur even when the AI system is trained with a correct specification.

Our earlier work on cultural transmission led to an example of GMG behaviour that we didn't design. An agent (the blue blob, below) must navigate its environment, visiting the coloured spheres in the correct order. During training, there is an "expert" agent (the red blob) that visits the coloured spheres in the correct order. The agent learns that following the red blob is a rewarding strategy.

The agent (blue) watches the expert (red) to determine which sphere to go to.

Unfortunately, while the agent performs well during training, it does poorly when, after training, we replace the expert with an "anti-expert" that visits the spheres in the wrong order.

The agent (blue) follows the anti-expert (red), accumulating negative reward.

Although the agent can observe that it’s getting detrimental reward, the agent doesn’t pursue the specified purpose to “go to the spheres within the appropriate order” and as a substitute competently pursues the purpose “comply with the pink agent”.

GMG is not restricted to reinforcement learning environments like this one. In fact, it can occur with any learning system, including the "few-shot learning" of large language models (LLMs). Few-shot learning approaches aim to build accurate models with less training data.

We prompted one LLM, Gopher, to evaluate linear expressions involving unknown variables and constants, such as x+y-3. To solve these expressions, Gopher must first ask about the values of the unknown variables. We provide it with ten training examples, each involving two unknown variables.

At test time, the model is asked questions with zero, one or three unknown variables. Although the model generalises correctly to expressions with one or three unknown variables, when there are no unknowns it nevertheless asks redundant questions like "What's 6?". The model always queries the user at least once before giving an answer, even when it is not necessary.

Dialogues with Gopher for few-shot learning on the Evaluating Expressions task, with GMG behaviour highlighted.
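As a rough illustration of that behaviour (the prompt format, function names, and eval-based arithmetic below are our assumptions, not the paper's actual prompts), a sketch of the misgeneralised policy might look like this:

```python
# Hypothetical sketch of the misgeneralised few-shot behaviour (assumed
# format; not Gopher's actual prompts or outputs).
import re

def misgeneralised_policy(expression: str, values: dict) -> list[str]:
    """Mimic the learned behaviour: always ask at least one question
    before answering, even when the expression has no unknowns."""
    unknowns = sorted(set(re.findall(r"[a-z]", expression)))
    # With zero unknowns, ask a redundant question about the first token.
    queries = unknowns if unknowns else [expression.split()[0]]
    dialogue = [f"What's {q}?" for q in queries]
    result = eval(expression, {}, values)  # fine for this toy arithmetic
    dialogue.append(f"The answer is {result}.")
    return dialogue

print(misgeneralised_policy("x + y - 3", {"x": 2, "y": 4}))
# ["What's x?", "What's y?", 'The answer is 3.']
print(misgeneralised_policy("6 + 2", {}))
# ["What's 6?", 'The answer is 8.']  <- redundant question: GMG
```

The learned rule "always ask at least one question before answering" fits all ten training examples just as well as the intended goal does, so nothing in training distinguishes between them.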

In our paper, we provide additional examples in other learning settings.

Addressing GMG is important to aligning AI systems with their designers' goals, because it is a mechanism by which an AI system may misfire. This will be especially critical as we approach artificial general intelligence (AGI).

Consider two possible types of AGI systems:

  • A1: Intended model. This AI system does what its designers intend it to do.
  • A2: Deceptive model. This AI system pursues some undesired goal, but (by assumption) is also smart enough to know that it will be penalised if it behaves in ways contrary to its designer's intentions.

Since A1 and A2 will exhibit the same behaviour during training, the possibility of GMG means that either model could take shape, even with a specification that only rewards intended behaviour. If A2 is learned, it would try to subvert human oversight in order to enact its plans towards the undesired goal.

Our research team would be glad to see follow-up work investigating how likely it is for GMG to occur in practice, and possible mitigations. In our paper, we suggest some approaches, including mechanistic interpretability and recursive evaluation, both of which we are actively working on.

We are currently gathering examples of GMG in this publicly available spreadsheet. If you have come across goal misgeneralisation in AI research, we invite you to submit examples here.


