Data Science Decoded

Mike E
We discuss seminal mathematical papers (sometimes really old 😎) that have shaped and established the fields of machine learning and data science as we know them.

Available Episodes

5 of 17
  • Data Science #17 - The Monte Carlo Algorithm (1949)
    We review the original Monte Carlo paper: Metropolis, Nicholas, and Stanislaw Ulam, "The Monte Carlo Method," Journal of the American Statistical Association 44.247 (1949): 335-341. The Monte Carlo method uses random sampling to approximate solutions to problems that are too complex for analytical methods, such as integration, optimization, and simulation. Its power lies in leveraging randomness to solve high-dimensional and nonlinear problems, making it a fundamental tool in computational science. In modern data science and AI, Monte Carlo drives key techniques such as Bayesian inference (via MCMC) for probabilistic modeling, policy evaluation in reinforcement learning, and uncertainty quantification in predictions, and it is essential for handling otherwise intractable computations in machine learning systems. By combining scalability and flexibility, Monte Carlo methods enable breakthroughs in areas like natural language processing, computer vision, and autonomous systems; their ability to approximate solutions underpins advances in probabilistic reasoning, decision-making, and optimization in the era of AI and big data. (A minimal random-sampling integration sketch appears after the episode list.)
    --------  
    38:11
  • Data Science #16 - The First Stochastic Descent Algorithm (1952)
    In the 16th episode we go over the seminal paper by Herbert Robbins and Sutton Monro, "A Stochastic Approximation Method," The Annals of Mathematical Statistics (1951): 400-407. The paper introduced the stochastic approximation method, a groundbreaking iterative technique for finding the root of an unknown function using only noisy observations. This method enabled real-time, adaptive estimation without requiring the function's explicit form, revolutionizing statistical practice in fields like bioassay and engineering. Robbins and Monro's work laid the groundwork for stochastic gradient descent (SGD), the primary optimization algorithm in modern machine learning and deep learning; SGD's efficiency in training neural networks through iterative updates is directly rooted in this method. Additionally, their approach to handling noisy binary feedback inspired early concepts in reinforcement learning, where algorithms learn from sparse rewards and adapt over time. The paper's principles are fundamental to nonparametric methods, online learning, and dynamic optimization in data science and AI today. By enabling sequential, probabilistic updates, the Robbins-Monro method supports adaptive decision-making in real-time applications such as recommender systems, autonomous systems, and financial trading, making it a cornerstone of modern AI's ability to learn in complex, uncertain environments. (A minimal Robbins-Monro iteration sketch appears after the episode list.)
    --------  
    42:20
  • Data Science #15 - The First Decision Tree Algorithm (1963)
    In the 15th episode we went over the paper "Problems in the Analysis of Survey Data, and a Proposal" by James N. Morgan and John A. Sonquist from 1963. It highlights seven key issues in analyzing complex survey data: high dimensionality, categorical variables, measurement errors, sample variability, intercorrelations, interaction effects, and causal chains. These challenges complicate efforts to draw meaningful conclusions about relationships between factors like income, education, and occupation. To address them, the authors propose a method that sequentially splits the data on the feature that most reduces unexplained variance, much like modern decision trees. The method focuses on maximizing the reduction in the sum of squared errors (SSE), capturing interaction effects, and accounting for sample variability, and it handles both categorical and continuous variables while respecting logical causal priorities. This paper has had a significant influence on modern data science and AI, laying the groundwork for decision trees, CART, random forests, and boosting algorithms; its approach of splitting data to reduce error, handle interactions, and respect feature hierarchies is foundational in many machine learning models used today. (A minimal variance-reducing split sketch appears after the episode list.) Link to the full paper on our website: https://datasciencedecodedpodcast.com/episode-15-the-first-decision-tree-algorithm-1963
    --------  
    36:35
  • Data Science #14 - The original k-means algorithm paper review (1957)
    In the 14th episode we go over Stuart Lloyd's 1957 paper "Least Squares Quantization in PCM" (which was only published in 1982), the paper to which the k-means algorithm can be traced back. Lloyd introduces an approach to quantization in pulse-code modulation (PCM) that is essentially one-dimensional k-means clustering. He discusses how quantization intervals and their corresponding quantum values should be adjusted based on signal amplitude distributions to minimize noise, improving efficiency in PCM systems, and he derives an optimization framework that minimizes quantization noise under finite quantization schemes. Lloyd's algorithm bears a strong resemblance to the k-means clustering algorithm: both seek to minimize a sum of squared errors. In Lloyd's method, quantization is analogous to assigning data points (signal amplitudes) to clusters (quantization intervals) based on proximity to centroids (quantum values), with the centroids updated iteratively as the mean of the assigned points; this iterative recalculation of quantization values mirrors k-means' recalculation of cluster centroids. While Lloyd's work focuses on signal processing in telecommunications, its underlying principles of optimizing quantization have clear parallels with the k-means method used in clustering tasks in data science. The paper's influence on modern data science is profound: Lloyd's algorithm not only laid the groundwork for k-means but also provided a fundamental understanding of quantization error minimization, which is critical in fields such as machine learning, image compression, and signal processing. The algorithm's simplicity and iterative nature have led to its wide adoption across data science applications, and Lloyd's work remains a cornerstone of both the theory of clustering algorithms and practical signal and data compression. (A minimal Lloyd iteration sketch appears after the episode list.)
    --------  
    46:57
  • Data Science #13 - Kolmogorov complexity paper review (1965) - Part 2
    In the 13th episode we review the second part of Kolmogorov's seminal paper "Three Approaches to the Quantitative Definition of Information," Problems of Information Transmission 1.1 (1965): 1-7. The paper introduces algorithmic complexity (Kolmogorov complexity), which measures the amount of information in an object by the length of the shortest program that can describe it. This shifts the focus from Shannon entropy, which measures uncertainty probabilistically, to understanding the complexity of structured objects. Kolmogorov argues that systems like texts or biological data, governed by rules and patterns, are better analyzed by their compressibility (how efficiently they can be described) than by random probabilistic models. In modern data science and AI these ideas are crucial: machine learning models, such as neural networks, aim to compress data into efficient representations in order to generalize and predict, and Kolmogorov complexity underpins the idea of minimizing model complexity while preserving key information, which is essential for preventing overfitting and improving generalization. In AI, tasks such as text generation and data compression directly apply Kolmogorov's concept of finding the most compact representation, making his work foundational for building efficient, powerful models. This is part 2 of 2 episodes covering this paper (part 1 is Episode 12). (A minimal compression-based complexity proxy appears after the episode list.)
    --------  
    29:25
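
Below is a minimal sketch (not taken from the Metropolis-Ulam paper) of the random-sampling idea behind episode 17: it estimates a one-dimensional integral by averaging the integrand at uniformly random points. The function name and the example integrand are illustrative assumptions.

```python
import math
import random

def monte_carlo_integral(f, n_samples=100_000, seed=0):
    """Estimate the integral of f over [0, 1] by averaging f at uniform random points."""
    rng = random.Random(seed)
    return sum(f(rng.random()) for _ in range(n_samples)) / n_samples

# Illustrative example: integrate exp(-x^2) on [0, 1]; the true value is about 0.7468.
print(f"Monte Carlo estimate: {monte_carlo_integral(lambda x: math.exp(-x * x)):.4f}")
```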
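
A minimal sketch of the Robbins-Monro iteration discussed in episode 16, assuming a toy noisy linear function; the step sizes a_n = 1/n follow the decreasing-step schedule analyzed in the paper, but the specific function and names here are illustrative.

```python
import random

def robbins_monro(noisy_f, target=0.0, theta0=0.0, n_steps=5000, seed=0):
    """Drive theta toward the root of E[noisy_f(theta)] = target using
    the update theta <- theta + a_n * (target - y_n) with a_n = 1/n."""
    rng = random.Random(seed)
    theta = theta0
    for n in range(1, n_steps + 1):
        y = noisy_f(theta, rng)            # one noisy observation at the current iterate
        theta += (1.0 / n) * (target - y)  # decreasing step sizes drive convergence
    return theta

# Toy function M(theta) = 2*theta - 3 observed with Gaussian noise;
# the root of M(theta) = 0 is theta = 1.5, and the iterates settle near it.
noisy = lambda theta, rng: 2.0 * theta - 3.0 + rng.gauss(0.0, 1.0)
print(robbins_monro(noisy))
```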
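
A minimal sketch of the variance-reducing split at the heart of episode 15's Morgan-Sonquist proposal: scan candidate thresholds on one numeric feature and keep the split that most reduces the within-group sum of squared errors. The data and function names are illustrative, not from the paper.

```python
def best_split(x, y):
    """Return the (threshold, SSE reduction) that best splits y by thresholding x."""
    def sse(vals):
        if not vals:
            return 0.0
        mean = sum(vals) / len(vals)
        return sum((v - mean) ** 2 for v in vals)

    total = sse(y)
    best = (None, 0.0)
    for threshold in sorted(set(x)):
        left = [yi for xi, yi in zip(x, y) if xi <= threshold]
        right = [yi for xi, yi in zip(x, y) if xi > threshold]
        reduction = total - (sse(left) + sse(right))
        if reduction > best[1]:
            best = (threshold, reduction)
    return best

# Toy data: y jumps once x exceeds 3, so the best split lands at that threshold.
print(best_split([1, 2, 3, 4, 5, 6], [1.0, 1.2, 0.9, 5.1, 4.8, 5.2]))
```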
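
A minimal sketch of the one-dimensional Lloyd iteration described in episode 14: assign each sample (signal amplitude) to its nearest centroid (quantum value), then move each centroid to the mean of its assigned samples, which is exactly the k-means loop. The sample values are illustrative.

```python
def lloyd_1d(samples, centroids, n_iters=20):
    """Iteratively reassign samples to their nearest centroids and recompute centroid means."""
    for _ in range(n_iters):
        clusters = [[] for _ in centroids]
        for s in samples:
            nearest = min(range(len(centroids)), key=lambda i: abs(s - centroids[i]))
            clusters[nearest].append(s)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

# Two well-separated groups of amplitudes; the centroids settle near 1.0 and 5.0.
print(lloyd_1d([0.9, 1.1, 1.0, 4.9, 5.1, 5.0], centroids=[0.0, 10.0]))
```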
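
True Kolmogorov complexity is uncomputable, but compressed length gives a crude upper-bound proxy for the idea in episode 13: structured data admits a much shorter description than random-looking data. Using zlib here is an illustrative choice, not something prescribed by the paper.

```python
import os
import zlib

def compressed_length(data: bytes) -> int:
    """Length of the zlib-compressed data: a computable upper-bound proxy for description length."""
    return len(zlib.compress(data, 9))

# A highly regular string compresses far better than random bytes of the same length.
regular = b"ab" * 500
random_ish = os.urandom(1000)
print(compressed_length(regular), compressed_length(random_ish))
```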
