Current Coursework



Lex Fridman @ MIT

MIT 6.S099 Artificial General Intelligence

MIT AGI: Artificial General Intelligence

MIT AGI: Building machines that see, learn, and think like people (Josh Tenenbaum)

MIT AGI: Future of Intelligence (Ray Kurzweil)

No back propagation in the human brain: it doesn't use deep learning; it uses a different architecture.

fusiform gyrus - the brain region that recognizes faces.

identified a repeating module of about 100 neurons, modeled as a hidden Markov model
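
The claim above can be illustrated with a minimal hidden Markov model sketch. The states, transition, and emission probabilities below are invented toy values, and `hmm_forward` is a hypothetical helper, not anything from the lecture:

```python
# Minimal HMM forward algorithm: probability of an observation sequence,
# illustrating the kind of sequential pattern recognizer described above.
# All probabilities here are made-up toy values (assumption).

def hmm_forward(obs, states, start_p, trans_p, emit_p):
    """Return P(observation sequence) under the HMM (forward algorithm)."""
    # alpha[s] = P(observations so far, current hidden state = s)
    alpha = {s: start_p[s] * emit_p[s][obs[0]] for s in states}
    for o in obs[1:]:
        alpha = {
            s: sum(alpha[p] * trans_p[p][s] for p in states) * emit_p[s][o]
            for s in states
        }
    return sum(alpha.values())

states = ["low", "high"]  # hypothetical firing regimes
start_p = {"low": 0.6, "high": 0.4}
trans_p = {"low": {"low": 0.7, "high": 0.3},
           "high": {"low": 0.4, "high": 0.6}}
emit_p = {"low": {"a": 0.8, "b": 0.2},
          "high": {"a": 0.1, "b": 0.9}}

p = hmm_forward(["a", "b", "b"], states, start_p, trans_p, emit_p)
```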

neocortex is the outer layer of the brain

Neocortex arose 100 million years ago with mammals.

Creativity and innovation are features of the neocortex.

long short-term memory (LSTM)

Ray Kurzweil's team built Gmail Smart Reply using a hierarchical model

The world is hierarchical; that is why evolution developed a hierarchical brain structure to understand it.

criticism of deep neural nets - hierarchy is needed.

Deep neural nets don't explain themselves very well.

cerebellum has more neurons than neocortex

Writing your signature is controlled by the cerebellum.

Most movement control has migrated from the cerebellum to the neocortex.

calculating thinking versus meditative thinking

MIT AGI: How the brain creates emotions (Lisa Feldman Barrett)

MIT AGI: Computational Universe (Stephen Wolfram)

MIT AGI: Cognitive Architecture (Nate Derbinsky)

Bigger Theme - Cognitive Architecture

Why AGI?

  • Research Questions / Goals
  • What is Cognitive Architecture?

-Prototypical Assumptions, Structures

-Representative Snapshots

  • An example of Research in SOAR?

-Human Inspiration - What to Remember, What to Forget

Common Motivations

Existential Curiosity

  • Abstract Knowledge Creation
  • Answering Challenging Questions

Cognitive Modeling

  • Understanding how a Human Brain/Mind Functions
  • Applications in Medicine

Systems Development

  • Build more capable hardware/software for replacing/augmenting human performance
  • When designing an artifact, look to examples

It's difficult to seriously address topics where most of the problems are yet to be solved, as in AGI. That's what we're trying to do with this series: approach the subject systematically, from all angles, with world experts in various disciplines. Academia has stayed away from AGI because a large number of people claim to have answers who really don't (in any way that can be validated). This happens with all kinds of topics that captivate the human imagination but where we know very little: cold fusion, time travel, teleportation, etc. Nevertheless, I feel that advances in deep learning, deep reinforcement learning, neuroscience, computational cognitive science, and robotics have now allowed us to revisit this topic seriously, even though we are still (in my humble opinion) very far from knowing how to create human-level intelligence.

Without implementation and integration, it can be difficult to synthesize and generalize from diverse findings on intelligence.

Power Law of Practice
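
The Power Law of Practice says task completion time falls as a power of the number of practice trials, T(n) = T1 * n^(-alpha). A minimal sketch; the values of T1 and alpha below are illustrative assumptions:

```python
# Power Law of Practice: response time shrinks as a power of practice.
# t1 (time on the first trial) and alpha (learning rate exponent) are
# illustrative values, not fitted to any real data.
def response_time(n, t1=2.0, alpha=0.4):
    """Predicted completion time on trial n, T(n) = t1 * n**(-alpha)."""
    return t1 * n ** (-alpha)
```

Doubling practice always cuts time by the same fixed fraction, which is why the curve looks linear on a log-log plot.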

Temporal Difference Learning - in a sequential learning task how to update behavior to maximize future reward
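
A tabular TD(0) sketch of that idea: nudge the value estimate of a state toward the one-step bootstrapped target r + gamma * V(s'). The states, reward, and step size are illustrative assumptions:

```python
# TD(0) value update: move V(s) toward the target r + gamma * V(s_next).
def td0_update(V, s, r, s_next, alpha=0.5, gamma=0.9):
    V[s] += alpha * (r + gamma * V[s_next] - V[s])
    return V

V = {"A": 0.0, "B": 0.0, "terminal": 0.0}
# One observed transition: from A we move to B and receive reward 1.
td0_update(V, "A", 1.0, "B")
```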

There are regularities at multiple time scales that are productive for understanding the mind

There exist useful layers of abstraction between bands, roughly

  • Biological / Neuroscience
  • Cognitive / Rational: psychology, cognitive science
  • Social: Economics, political science, sociology
  • Cognitive Architectures typically focus on the deliberative act (though some model lower)

Decision makers can satisfice either by finding optimum solutions for a simplified world or by finding satisfactory solutions for a more realistic world. Neither approach, in general, dominates the other, and both have continued to coexist.

Semantic Pointer Architecture Unified Network (Spaun)

Chris Eliasmith

CogArch vs Deep Machine Learning

  • ML integration for perceptual processing, feature extraction, learning, actuation
  • CogArch for naturally encoding known processes in an associative fashion

-Unified Theories of Cognition, Allen Newell

-The Soar Cognitive Architecture, John E. Laird

-How to Build a Brain, Chris Eliasmith

-How Can the Human Mind Occur in the Physical Universe?, John R. Anderson

-Computational Learning Laboratory, Stanford University

How important is forgetting to AGI?

MIT 6.S094: Deep Learning for Self-Driving Cars

MIT 6.S094: Deep Learning

MIT 6.S094: Deep Reinforcement Learning

MIT AGI: OpenAI Meta-Learning and Self-Play (Ilya Sutskever)

Why does back propagation work?

Mostly a mystery

Likely due to the great variety that is found in most natural data sets

This inexplicable fact powers most of modern AI.

back propagation solves a profound computational problem: circuit search.

Neural net training = solving a neural equation.
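
A toy illustration of back propagation as circuit search: gradient descent via the chain rule adjusts the parameters of a tiny two-unit "circuit" instead of enumerating circuits. The network shape, training example, and learning rate are made-up assumptions:

```python
import math

# One-hidden-unit network trained by back propagation to map 1.0 -> 1.0.
# The chain rule lets gradient descent search over circuit parameters.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

w1, w2 = 0.5, -0.5   # the two "circuit" parameters
x, y = 1.0, 1.0      # single illustrative training example
lr = 1.0

for _ in range(200):
    h = sigmoid(w1 * x)          # forward pass
    out = sigmoid(w2 * h)
    # backward pass (chain rule) for loss = (out - y)**2
    d_out = 2.0 * (out - y)
    d_w2 = d_out * out * (1 - out) * h
    d_h = d_out * out * (1 - out) * w2
    d_w1 = d_h * h * (1 - h) * x
    w2 -= lr * d_w2
    w1 -= lr * d_w1
```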

Reinforcement learning

  • Good framework for building intelligent agents.
  • Acting to achieve goals is a key part of intelligence.
  • Can specify nearly any AI problem.
  • RL is interesting because interesting RL algorithms exist.

Policy Gradients:

  • "Just take the gradient"
  • Stable, easy to use
  • Very few tricks needed
  • On policy
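
A minimal sketch of "just take the gradient": REINFORCE on a two-armed bandit, where the logit of each sampled action is reinforced in proportion to reward times the score function grad log pi(a). The bandit, learning rate, and episode count are illustrative assumptions:

```python
import math
import random

# REINFORCE on a two-armed bandit: arm 1 pays 1.0, arm 0 pays 0.0, so
# the learned policy should come to prefer arm 1.
random.seed(0)
theta = [0.0, 0.0]  # one logit per action

def softmax(logits):
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    z = sum(exps)
    return [e / z for e in exps]

for _ in range(2000):
    probs = softmax(theta)
    a = random.choices([0, 1], weights=probs)[0]  # sample from the policy
    reward = 1.0 if a == 1 else 0.0
    # grad of log pi(a) w.r.t. each logit is one_hot(a) - probs
    for i in range(2):
        grad = (1.0 if i == a else 0.0) - probs[i]
        theta[i] += 0.1 * reward * grad
```

Note this is on-policy: the gradient is only valid for actions sampled from the current policy itself.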

Q-Learning Based:

  • Less stable, more sample efficient
  • Won't explain how it works
  • Off policy: can be trained on data generated by some other policy.
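
A tabular Q-learning sketch showing the off-policy property: updates use the max over next actions, so transitions can come from any behavior policy (here, a uniformly random one). The toy chain environment and hyperparameters are assumptions:

```python
import random

# Q-learning on a 3-state chain: "right" advances toward the goal.
random.seed(0)
gamma, alpha = 0.9, 0.5
states, actions = ["s0", "s1", "goal"], ["left", "right"]
Q = {(s, a): 0.0 for s in states for a in actions}

def step(s, a):
    """Deterministic toy dynamics; returns (next_state, reward)."""
    if s == "s0":
        return ("s1", 0.0) if a == "right" else ("s0", 0.0)
    if s == "s1":
        return ("goal", 1.0) if a == "right" else ("s0", 0.0)
    return ("goal", 0.0)  # goal is absorbing

# Learn from randomly generated (off-policy) transitions.
for _ in range(500):
    s = random.choice(["s0", "s1"])
    a = random.choice(actions)
    s_next, r = step(s, a)
    # Off-policy target: max over next actions, regardless of behavior.
    target = r + gamma * max(Q[(s_next, b)] for b in actions)
    Q[(s, a)] += alpha * (target - Q[(s, a)])
```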

RL's potential

  • An agent running a good RL algorithm can achieve an overwhelming variety of tasks
  • This is almost the purpose of our field
  • Today: RL is mostly data inefficient, fairly bad at exploration.

Meta learning: learn to learn by solving many tasks

The dream:

Learn to learn

Train a system on many tasks

Resulting system can solve new tasks quickly.
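
A minimal Reptile-style sketch of "train on many tasks, adapt quickly to new ones" (an assumption: Reptile stands in for the general idea and is not an algorithm named in the lecture). Each task is "regress to a constant c"; the meta-update pulls the initialization toward a point from which a few gradient steps solve any sampled task:

```python
import random

# Reptile-style meta-learning: the tasks, loss, and step sizes are toy
# assumptions. Tasks are "predict a constant c", c ~ Uniform(-1, 1), so a
# good initialization sits near 0, the center of the task distribution.
random.seed(0)

def inner_sgd(theta, c, steps=5, lr=0.1):
    """A few gradient steps on the per-task loss (theta - c)**2."""
    for _ in range(steps):
        theta -= lr * 2.0 * (theta - c)
    return theta

theta = 10.0  # deliberately bad initialization
for _ in range(1000):
    c = random.uniform(-1.0, 1.0)     # sample a task
    adapted = inner_sgd(theta, c)     # adapt to it with inner SGD
    theta += 0.1 * (adapted - theta)  # meta-update toward adapted params
```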

Several Success Stories

  • Successful learning on the Omniglot data set
  • Given new character, learn to recognize its class
  • Was designed as a challenge to deep learning
  • Neural Architecture Search

Zoph and Le, 2017

Learning a Hierarchy of Actions with Meta Learning, Frans et al. 2017

Self Play for Physicality and Dexterity Bansal et al. 2017

Self Play: Very Rapid Increase in Performance

Science, Vol 306, Issue 5703, pp. 1903-1907

  • Learn it on a small data set, test it on a large one.

Alignment: Learning from Human Feedback. Christiano et al., 2017


  • Will likely solve the technical alignment problem
  • But what are the right goals? Political problem.

Back propagation solves the problem of circuit search

Does the brain use back propagation?

MIT AGI: Consciousness (Christof Koch)
