Listen to COMPLEXITY in the app

COMPLEXITY

COMPLEXITY Podcast
Santa Fe Institute
The official podcast of the Santa Fe Institute. Subscribe now and be part of the exploration!

Available Episodes

5 of 117
  • Nature of Intelligence, Ep. 4: Babies vs Machines
    Guests: Linda Smith, Distinguished Professor and Chancellor's Professor, Department of Psychological and Brain Sciences, Indiana University Bloomington; Michael Frank, Benjamin Scott Crocker Professor of Human Biology, Department of Psychology, Stanford University
    Hosts: Abha Eli Phoboo & Melanie Mitchell
    Producer: Katherine Moncure
    Podcast theme music by: Mitch Mignano
    Follow us on: Twitter • YouTube • Facebook • Instagram • LinkedIn • Bluesky
    More info:
    Tutorial: Fundamentals of Machine Learning
    Lecture: Artificial Intelligence
    SFI programs: Education
    Books:
    - Artificial Intelligence: A Guide for Thinking Humans by Melanie Mitchell
    Talks:
    - Why "Self-Generated Learning" May Be More Radical and Consequential Than First Appears by Linda Smith
    - Children's Early Language Learning: An Inspiration for Social AI by Michael Frank at Stanford HAI
    - The Future of Artificial Intelligence by Melanie Mitchell
    Papers & Articles:
    - "Curriculum Learning With Infant Egocentric Videos," in NeurIPS 2023 (September 21)
    - "The Infant's Visual World: The Everyday Statistics for Visual Learning," by Swapnaa Jayaraman and Linda B. Smith, in The Cambridge Handbook of Infant Development: Brain, Behavior, and Cultural Context, Chapter 20, Cambridge University Press (September 26, 2020)
    - "Can lessons from infants solve the problems of data-greedy AI?" in Nature (March 18, 2024), doi.org/10.1038/d41586-024-00713-5
    - "Episodes of experience and generative intelligence," in Trends in Cognitive Sciences (October 19, 2022), doi.org/10.1016/j.tics.2022.09.012
    - "Baby steps in evaluating the capacities of large language models," in Nature Reviews Psychology (June 27, 2023), doi.org/10.1038/s44159-023-00211-x
    - "Auxiliary task demands mask the capabilities of smaller language models," in COLM (July 10, 2024)
    - "Learning the Meanings of Function Words From Grounded Language Using a Visual Question Answering Model," in Cognitive Science (May 14, 2024), doi.org/10.1111/cogs.13448
    --------  
    38:37
  • Nature of Intelligence, Ep. 3: What kind of intelligence is an LLM?
    Guests: Tomer Ullman, Assistant Professor, Department of Psychology, Harvard University; Murray Shanahan, Professor of Cognitive Robotics, Department of Computing, Imperial College London, and Principal Research Scientist, Google DeepMind
    Hosts: Abha Eli Phoboo & Melanie Mitchell
    Producer: Katherine Moncure
    Podcast theme music by: Mitch Mignano
    Follow us on: Twitter • YouTube • Facebook • Instagram • LinkedIn • Bluesky
    More info:
    Tutorial: Fundamentals of Machine Learning
    Lecture: Artificial Intelligence
    SFI programs: Education
    Books:
    - Artificial Intelligence: A Guide for Thinking Humans by Melanie Mitchell
    - The Technological Singularity by Murray Shanahan
    - Embodiment and the Inner Life: Cognition and Consciousness in the Space of Possible Minds by Murray Shanahan
    - Solving the Frame Problem by Murray Shanahan
    - Search, Inference and Dependencies in Artificial Intelligence by Murray Shanahan and Richard Southwick
    Talks:
    - The Future of Artificial Intelligence by Melanie Mitchell
    - Artificial intelligence: A brief introduction to AI by Murray Shanahan
    Papers & Articles:
    - "A Conversation With Bing's Chatbot Left Me Deeply Unsettled," in The New York Times (Feb 16, 2023)
    - "Bayesian Models of Conceptual Development: Learning as Building Models of the World," in Annual Review of Developmental Psychology, Volume 2 (Oct 26, 2020), doi.org/10.1146/annurev-devpsych-121318-084833
    - "Comparing the Evaluation and Production of Loophole Behavior in Humans and Large Language Models," in Findings of the Association for Computational Linguistics (December 2023), doi.org/10.18653/v1/2023.findings-emnlp.264
    - "Role play with large language models," in Nature (Nov 8, 2023), doi.org/10.1038/s41586-023-06647-8
    - "Large Language Models Fail on Trivial Alterations to Theory-of-Mind Tasks," in arXiv (v5, March 14, 2023), doi.org/10.48550/arXiv.2302.08399
    - "Talking about Large Language Models," in Communications of the ACM (Feb 12, 2024)
    - "Simulacra as Conscious Exotica," in arXiv (v2, July 11, 2024), doi.org/10.48550/arXiv.2402.12422
    --------  
    45:05
  • Nature of Intelligence, Ep. 2: The relationship between language and thought
    Guests: Evelina Fedorenko, Associate Professor, Department of Brain and Cognitive Sciences, and Investigator, McGovern Institute for Brain Research, MIT; Steve Piantadosi, Professor of Psychology and Neuroscience, and Head of the Computation and Language Lab, UC Berkeley; Gary Lupyan, Professor of Psychology, University of Wisconsin-Madison
    Hosts: Abha Eli Phoboo & Melanie Mitchell
    Producer: Katherine Moncure
    Podcast theme music by: Mitch Mignano
    Follow us on: Twitter • YouTube • Facebook • Instagram • LinkedIn • Bluesky
    More info:
    Tutorial: Fundamentals of Machine Learning
    Lecture: Artificial Intelligence
    SFI programs: Education
    Books:
    - Artificial Intelligence: A Guide for Thinking Humans by Melanie Mitchell
    - Developing Object Concepts in Infancy: An Associative Learning Perspective by Rakison, D.H., and G. Lupyan
    - Language and Mind by Noam Chomsky
    - On Language by Noam Chomsky
    Talks:
    - The Future of Artificial Intelligence by Melanie Mitchell
    - The language system in the human brain: Parallels & Differences with LLMs by Evelina Fedorenko
    Papers & Articles:
    - "Dissociating language and thought in large language models," in Trends in Cognitive Sciences (March 19, 2024), doi: 10.1016/j.tics.2024.01.011
    - "The language network as a natural kind within the broader landscape of the human brain," in Nature Reviews Neuroscience (April 12, 2024), doi.org/10.1038/s41583-024-00802-4
    - "Visual grounding helps learn word meanings in low-data regimes," in arXiv (v2, March 25, 2024), doi.org/10.48550/arXiv.2310.13257
    - "No evidence of theory of mind reasoning in the human language network," in Cerebral Cortex (December 28, 2022), doi.org/10.1093/cercor/bhac505
    - "Chapter 1: Modern language models refute Chomsky's approach to language," by Steve T. Piantadosi (v7, November 2023), lingbuzz/007180
    - "Uniquely human intelligence arose from expanded information capacity," in Nature Reviews Psychology (April 2, 2024), doi.org/10.1038/s44159-024-00283-3
    - "Understanding the allure and pitfalls of Chomsky's science," review by Gary Lupyan, in The American Journal of Psychology (Spring 2018), doi.org/10.5406/amerjpsyc.131.1.0112
    - "Language is more abstract than you think, or, why aren't languages more iconic?" in Philosophical Transactions of the Royal Society B (June 18, 2018), doi.org/10.1098/rstb.2017.0137
    - "Does vocabulary help structure the mind?" in Minnesota Symposia on Child Psychology: Human Communication: Origins, Mechanisms, and Functions (February 27, 2021), doi.org/10.1002/9781119684527.ch6
    - "Use of superordinate labels yields more robust and human-like visual representations in convolutional neural networks," in Journal of Vision (December 2021), doi.org/10.1167/jov.21.13.13
    - "Appeals to 'Theory of Mind' no longer explain much in language evolution," by Justin Sulik and Gary Lupyan
    - "Effects of language on visual perception," in Trends in Cognitive Sciences (October 1, 2020), doi.org/10.1016/j.tics.2020.08.005
    - "Is language-of-thought the best game in the town we live?" in Behavioral and Brain Sciences (September 28, 2023), doi:10.1017/S0140525X23001814
    - "Can we distinguish machine learning from human learning?" in arXiv (October 8, 2019), doi.org/10.48550/arXiv.1910.03466
    --------  
    37:44
  • Nature of Intelligence, Ep. 1: What is Intelligence?
    Guests: Alison Gopnik, SFI External Faculty; Professor of Psychology and Affiliate Professor of Philosophy, University of California, Berkeley; Member of the Berkeley AI Research Group; John Krakauer, SFI External Faculty; John C. Malone Professor of Neurology, Neuroscience, and Physical Medicine & Rehabilitation, Johns Hopkins University
    Hosts: Abha Eli Phoboo & Melanie Mitchell
    Producer: Katherine Moncure
    Podcast theme music by: Mitch Mignano
    Podcast logo by: Nicholas Graham
    Follow us on: Twitter • YouTube • Facebook • Instagram • LinkedIn • Bluesky
    More info:
    Complexity Explorer:
    Tutorial: Fundamentals of Machine Learning
    Lecture: Artificial Intelligence
    SFI programs: Education
    Books:
    - Artificial Intelligence: A Guide for Thinking Humans by Melanie Mitchell
    - Words, Thoughts and Theories by Alison Gopnik and Andrew N. Meltzoff
    - The Scientist in the Crib: Minds, Brains, and How Children Learn by Alison Gopnik, Andrew N. Meltzoff, and Patricia K. Kuhl
    - The Philosophical Baby: What Children's Minds Tell Us About Truth, Love, and the Meaning of Life by Alison Gopnik
    - The Gardener and the Carpenter: What the New Science of Child Development Tells Us About the Relationship Between Parents and Children by Alison Gopnik
    Talks:
    - The Future of Artificial Intelligence by Melanie Mitchell
    - Imitation Versus Innovation: What Children Can Do That Large Language Models Can't by Alison Gopnik
    - The Minds of Children by Alison Gopnik
    - What Understanding Adds to Cambrian Intelligence: A Taxonomy by John Krakauer
    Papers & Articles:
    - "Why you can't make a computer that feels pain," by Daniel C. Dennett
    - "Transmission versus truth, imitation versus innovation: What children can do that Large Language and Language-and-Vision models cannot (yet)," in Perspectives on Psychological Science (October 26, 2023), doi.org/10.1177/17456916231201401
    - "Empowerment as Causal Learning, Causal Learning as Empowerment: A bridge between Bayesian causal hypothesis testing and reinforcement learning," by Alison Gopnik
    - "What can AI Learn from Human Exploration? Intrinsically-Motivated Humans and Agents in Open-World Exploration," by Yuqing Du et al., Agent Learning in Open-Endedness Workshop, NeurIPS 2024
    - "Two views on the cognitive brain," by David L. Barack & John W. Krakauer, in Nature Reviews Neuroscience, Vol. 22 (April 15, 2021)
    - "The intelligent reflex," by John W. Krakauer, in Philosophical Psychology (May 23, 2019), doi.org/10.1080/09515089.2019.1607281
    - "Representation in Cognitive Science by Nicholas Shea: But Is It Thinking? The Philosophy of Representation Meets Systems Neuroscience," by John W. Krakauer
    --------  
    43:28
  • Trailer for The Nature of Intelligence
    Right now, AI is having a moment, and it's not the first time grand predictions have been made about the potential of machines. But what does it really mean to say something like ChatGPT is "intelligent"? What exactly is intelligence? In this season of the Complexity podcast, The Nature of Intelligence, we'll explore this question over six episodes through conversations with cognitive scientists, neuroscientists, animal cognition researchers, and AI experts. Together, we'll investigate the complexities of human intelligence, how it compares to that of other species, and where AI fits in. We'll dive into the relationship between language and thought, examine AI's limitations, and ask: Could machines ever truly be like us?
    --------  
    3:25

More Science podcasts

About COMPLEXITY

Podcast website

Listen to COMPLEXITY, Do Zero, and many other podcasts from around the world with the radio.pt app

Get the free radio.pt app

  • Save your favourite stations and podcasts
  • Stream via Wi-Fi or Bluetooth
  • Compatible with CarPlay & Android Auto
  • And many more features
v6.28.0 | © 2007-2024 radio.de GmbH
Generated: 11/18/2024 - 5:18:47 PM