
Biz Builder Mike


Nov 22 2022

A far-sighted approach to machine learning

Picture two teams squaring off on a football field. The players can cooperate to achieve an objective, and compete against other players with conflicting interests. That’s how the game works.

Creating artificial intelligence agents that can learn to compete and cooperate as effectively as humans remains a thorny problem. A key challenge is enabling AI agents to anticipate future behaviors of other agents when they are all learning simultaneously.

Because of the complexity of this problem, current approaches tend to be myopic; the agents can only guess the next few moves of their teammates or competitors, which leads to poor performance in the long run. 

Researchers from MIT, the MIT-IBM Watson AI Lab, and elsewhere have developed a new approach that gives AI agents a farsighted perspective. Their machine-learning framework enables cooperative or competitive AI agents to consider what other agents will do as time approaches infinity, not just over the next few steps. The agents then adapt their behaviors accordingly to influence other agents’ future behaviors and arrive at an optimal, long-term solution.
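The idea of evaluating behavior "as time approaches infinity" is usually formalized with an average-reward objective. As a sketch (the symbols below are generic textbook notation, not necessarily the paper's), an agent following policy $\pi$ maximizes its long-run reward rate:

```latex
\rho(\pi) = \lim_{T \to \infty} \frac{1}{T}\,
    \mathbb{E}\!\left[\sum_{t=1}^{T} r_t \,\middle|\, \pi\right]
```

Unlike a discounted sum, this criterion weighs rewards in the far future as heavily as immediate ones, which is what lets the objective reach beyond a short planning horizon.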

This framework could be used by a group of autonomous drones working together to find a lost hiker in a thick forest, or by self-driving cars that strive to keep passengers safe by anticipating future moves of other vehicles driving on a busy highway.

“When AI agents are cooperating or competing, what matters most is when their behaviors converge at some point in the future. There are a lot of transient behaviors along the way that don’t matter very much in the long run. Reaching this converged behavior is what we really care about, and we now have a mathematical way to enable that,” says Dong-Ki Kim, a graduate student in the MIT Laboratory for Information and Decision Systems (LIDS) and lead author of a paper describing this framework.

The senior author is Jonathan P. How, the Richard C. Maclaurin Professor of Aeronautics and Astronautics and a member of the MIT-IBM Watson AI Lab. Co-authors include others at the MIT-IBM Watson AI Lab, IBM Research, Mila-Quebec Artificial Intelligence Institute, and Oxford University. The research will be presented at the Conference on Neural Information Processing Systems.


In this demo video, the red robot, which has been trained using the researchers’ machine-learning system, is able to defeat the green robot by learning more effective behaviors that take advantage of the constantly changing strategy of its opponent.

More agents, more problems

The researchers focused on a problem known as multiagent reinforcement learning. Reinforcement learning is a form of machine learning in which an AI agent learns by trial and error. Researchers give the agent a reward for “good” behaviors that help it achieve a goal. The agent adapts its behavior to maximize that reward until it eventually becomes an expert at a task.
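This trial-and-error loop can be made concrete with a minimal single-agent sketch. The toy below is standard tabular Q-learning on an invented chain world, not the paper's algorithm: the agent starts knowing nothing, and repeated reward updates teach it that "move right" leads to the goal.

```python
import random

def train_q_learning(n_states=5, n_actions=2, episodes=2000,
                     alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
    """Toy chain world: action 1 moves one state to the right, action 0
    stays put. Reaching the last state ends the episode with reward 1."""
    rng = random.Random(seed)
    q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s < n_states - 1:
            # Epsilon-greedy: explore occasionally, otherwise exploit.
            if rng.random() < epsilon:
                a = rng.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda i: q[s][i])
            s_next = s + 1 if a == 1 else s
            r = 1.0 if s_next == n_states - 1 else 0.0
            # Temporal-difference update toward reward plus discounted future value.
            q[s][a] += alpha * (r + gamma * max(q[s_next]) - q[s][a])
            s = s_next
    return q

q = train_q_learning()
```

After training, the learned values prefer "move right" in every state: the agent has become an expert at this (tiny) task purely by adapting to reward.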

But when many cooperative or competing agents are simultaneously learning, things become increasingly complex. As agents consider more future steps of their fellow agents, and how their own behavior influences others, the problem soon requires far too much computational power to solve efficiently. This is why other approaches only focus on the short term.

“The AIs really want to think about the end of the game, but they don’t know when the game will end. They need to think about how to keep adapting their behavior into infinity so they can win at some far time in the future. Our paper essentially proposes a new objective that enables an AI to think about infinity,” says Kim.

But since it is impossible to plug infinity into an algorithm, the researchers designed their system so that agents focus on a future point where their behavior will converge with that of other agents, known as an equilibrium. An equilibrium point determines the long-term performance of agents, and multiple equilibria can exist in a multiagent scenario. An effective agent therefore actively influences the future behaviors of other agents so that they reach an equilibrium that is desirable from the agent’s perspective. If all agents influence each other in this way, they converge to a general solution concept the researchers call an “active equilibrium.”
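The role of equilibria can be illustrated with a toy far simpler than the paper's method: two agents alternately best-respond to each other's last action in a coordination game. Play settles at a fixed point (an equilibrium), and which equilibrium is reached depends on where play starts, echoing the point that multiple equilibria can exist and that steering toward a desirable one matters. The payoffs and function names below are invented for illustration.

```python
# Symmetric coordination game: both agents score 2 if both pick action 0,
# 1 if both pick action 1, and 0 if they mismatch.
PAYOFF = {(0, 0): (2, 2), (1, 1): (1, 1), (0, 1): (0, 0), (1, 0): (0, 0)}

def best_response(opponent_action):
    """Action that maximizes own payoff against the opponent's last move."""
    return max((0, 1), key=lambda a: PAYOFF[(a, opponent_action)][0])

def play(a_start, b_start, rounds=10):
    """Alternating best-response dynamics from a given starting profile."""
    a, b = a_start, b_start
    for _ in range(rounds):
        a = best_response(b)  # agent A reacts to B's last action
        b = best_response(a)  # agent B reacts to A's new action
    return a, b
```

Starting from (0, 0), play stays at the higher-payoff equilibrium; starting from (0, 1), it settles at the lower-payoff (1, 1). Different early behaviors lock in different long-run outcomes, which is why influencing where play converges matters.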

The machine-learning framework they developed, known as FURTHER (which stands for FUlly Reinforcing acTive influence witH averagE Reward), enables agents to learn how to adapt their behaviors as they interact with other agents to achieve this active equilibrium.

FURTHER does this using two machine-learning modules. The first, an inference module, enables an agent to guess the future behaviors of other agents and the learning algorithms they use, based solely on their prior actions.

This information is fed into the reinforcement learning module, which the agent uses to adapt its behavior and influence other agents in a way that maximizes its reward.
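That two-module structure can be sketched at a very high level. The toy below is closer to classical fictitious play than to FURTHER itself: the "inference" step estimates the opponent's policy from its observed actions, and the "learning" step picks a best response to that estimate. Class and method names are invented for illustration; the real modules learn much richer models, including the opponents' learning algorithms.

```python
from collections import Counter

class TwoModuleAgent:
    """Illustrative two-module agent (names are hypothetical, not the paper's API)."""

    def __init__(self, n_actions=2):
        self.n_actions = n_actions
        self.opponent_counts = Counter()

    def infer_opponent(self):
        # Inference module: empirical distribution of the opponent's past actions.
        total = sum(self.opponent_counts.values())
        if total == 0:
            return [1.0 / self.n_actions] * self.n_actions
        return [self.opponent_counts[a] / total for a in range(self.n_actions)]

    def act(self, payoff):
        # Learning module: best response to the inferred opponent policy,
        # where payoff[(a, o)] is this agent's reward for action a vs. opponent action o.
        dist = self.infer_opponent()
        expected = [sum(p * payoff[(a, o)] for o, p in enumerate(dist))
                    for a in range(self.n_actions)]
        return max(range(self.n_actions), key=lambda a: expected[a])

    def observe(self, opponent_action):
        self.opponent_counts[opponent_action] += 1
```

For example, in a matching game where the agent is rewarded for copying its opponent, watching the opponent repeatedly play one action shifts the inferred distribution, and the learning step responds accordingly.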

“The challenge was thinking about infinity. We had to use a lot of different mathematical tools to enable that, and make some assumptions to get it to work in practice,” Kim says.

Winning in the long run

They tested their approach against other multiagent reinforcement learning frameworks in several different scenarios, including a pair of robots fighting sumo-style and a battle pitting two 25-agent teams against one another. In both instances, the AI agents using FURTHER won the games more often.

Since their approach is decentralized, which means the agents learn to win the games independently, it is also more scalable than other methods that require a central computer to control the agents, Kim explains.

The researchers used games to test their approach, but FURTHER could be used to tackle any kind of multiagent problem. For instance, it could be applied by economists seeking to develop sound policy in situations where many interacting entities have behaviors and interests that change over time.

Economics is one application Kim is particularly excited about studying. He also wants to dig deeper into the concept of an active equilibrium and continue enhancing the FURTHER framework.

This research is funded, in part, by the MIT-IBM Watson AI Lab.


Republished from https://news.mit.edu/2022/multiagent-machine-learning-ai-1123 via https://news.mit.edu/rss/topic/artificial-intelligence2


Written by Adam Zewe MIT News Office · Categorized: AI, MIT AI · Tagged: AI, MIT AI
