With over six years of experience in applied AI and data science, I lead teams delivering impactful machine learning solutions across diverse industries. My expertise spans deep learning, Bayesian methods, NLP, and optimization, equipping me to tackle complex business challenges with advanced solutions. I am deeply committed to mentoring and to sharing insights through publications and conference talks. My mentoring approach combines technical rigor with practical application, empowering you to navigate the evolving landscape of data science and AI with confidence.
My Mentoring Topics
- AI
- Data Science
- Analytics
- Machine Learning
- Generative AI
B.
26 September 2024: I had an interesting conversation with Salman about his career journey. He also answered my questions on the latest technologies, cloud certifications and the importance of attending boot camps.
J.
15 August 2024: The session was very helpful!
M.
3 August 2024: Salman was very generous with his time and offered very valuable and customised feedback. He listened well and was able to provide insights across a range of topics. Very much enjoyed meeting Salman.
v.
6 July 2024: It was a very insightful discussion with Salman. Thanks Salman, and I will be in touch.
A Student’s Guide to Bayesian Statistics
Ben Lambert
Supported by a wealth of learning features, exercises, and visual elements, as well as online video tutorials and interactive simulations, this book is the first student-focused introduction to Bayesian statistics. Without sacrificing technical integrity for the sake of simplicity, the author draws upon accessible, student-friendly language to provide approachable instruction perfectly aimed at statistics and Bayesian newcomers. Through a logical structure that introduces and builds upon key concepts in a gradual way and slowly acclimatizes students to using R and Stan software, the book covers:
- An introduction to probability and Bayesian inference
- Understanding Bayes' rule
- Nuts and bolts of Bayesian analytic methods
- Computational Bayes and real-world Bayesian analysis
- Regression analysis and hierarchical methods
This unique guide will help students develop the statistical confidence and skills to put the Bayesian formula into practice, from the basic concepts of statistical inference to complex applications of analysis.
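Bayes' rule itself is compact enough to demonstrate in a few lines. The screening-test numbers below are invented for illustration and are not an example from the book (whose code is in R, though the arithmetic is identical in any language):

```python
# Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E)
# Illustrative (made-up) numbers: a test with 99% sensitivity and
# 95% specificity for a condition with 1% prevalence.

def posterior(prior, sensitivity, specificity):
    """Probability of having the condition given a positive test result."""
    p_positive = sensitivity * prior + (1 - specificity) * (1 - prior)
    return sensitivity * prior / p_positive

p = posterior(prior=0.01, sensitivity=0.99, specificity=0.95)
print(round(p, 3))  # 0.167 -- far below the test's 99% sensitivity
```

The counterintuitive result (a positive test still leaves only a ~17% chance of having the condition) is exactly the kind of base-rate reasoning the book trains.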
The Hundred-page Machine Learning Book
Andriy Burkov
Key Insights from "The Hundred-page Machine Learning Book" by Andriy Burkov Demystification of Machine Learning: The book simplifies the complex concepts of machine learning, making it accessible for beginners as well as advanced readers. Comprehensive coverage: Despite its brevity, the book covers all the essential aspects of machine learning, including supervised learning, unsupervised learning, semi-supervised learning, and reinforcement learning. Practical Implementation: Burkov emphasizes the practical implementation of machine learning algorithms rather than just the theoretical concepts. Real-world examples: The book uses real-world examples to explain abstract concepts, aiding in better understanding. Python Code: The book includes Python code for different machine learning algorithms, making it easier for readers to apply the knowledge practically. Mathematical Foundations: The book provides a clear explanation of the mathematical foundations of machine learning, which are essential for understanding the algorithms. Model Evaluation: Burkov dedicates a significant part of the book to model evaluation, discussing the importance of performance metrics and validation techniques. Feature Engineering: The book highlights the significance of feature engineering in improving the performance of machine learning models. Deep Learning: The book also introduces the concept of deep learning, providing a brief overview of neural networks and their applications. Future of Machine Learning: Burkov concludes the book by discussing the future of machine learning and its potential impact on various industries. Detailed Analysis of "The Hundred-page Machine Learning Book" "The Hundred-page Machine Learning Book" by Andriy Burkov is a concise yet comprehensive guide to machine learning. This makes it an ideal resource for beginners who want to understand the basics of machine learning, as well as for experienced practitioners looking for a quick reference. 
The book starts by demystifying machine learning, explaining that it is not a complex, inaccessible field reserved for computer scientists or mathematicians. Instead, Burkov argues that machine learning can be understood and applied by anyone with a basic understanding of mathematics and programming. Despite its brevity, the book covers all the essential aspects of machine learning. This includes supervised learning, where the algorithm learns from labeled data; unsupervised learning, where the algorithm learns from unlabeled data; semi-supervised learning, which combines both approaches; and reinforcement learning, where the algorithm learns by interacting with its environment.

One of the major strengths of the book is its emphasis on the practical implementation of machine learning algorithms. While many books focus on the theoretical aspects of machine learning, Burkov provides Python code for different algorithms, allowing readers to apply their knowledge practically. This hands-on approach is particularly useful for beginners, who often struggle to bridge the gap between theory and practice. The book also excels in using real-world examples to explain abstract concepts. For instance, Burkov uses the example of a spam filter to explain supervised learning, making it easier for readers to understand the concept.

While the book is accessible for beginners, it does not shy away from the mathematical foundations of machine learning. Burkov provides a clear explanation of these foundations, making complex concepts like gradient descent and backpropagation more understandable. This is a valuable resource for readers who want to delve deeper into machine learning. Model evaluation is another significant topic covered in the book. Burkov explains the importance of performance metrics and validation techniques, showing how they can be used to assess the accuracy of a machine learning model. This is crucial for practitioners, as it helps them judge the quality of their models. The book also highlights the importance of feature engineering in improving the performance of machine learning models. Burkov explains how selecting the right features can make a significant difference in the model's performance, providing tips and techniques for effective feature engineering.

In the final chapters of the book, Burkov introduces the concept of deep learning, providing a brief overview of neural networks and their applications. This serves as a good introduction to the topic, paving the way for readers to explore more advanced resources. Lastly, Burkov discusses the future of machine learning, looking at how it could impact various industries. This is particularly relevant in today's rapidly changing technological landscape, where machine learning is expected to play a pivotal role. In conclusion, "The Hundred-page Machine Learning Book" by Andriy Burkov is a concise and comprehensive introduction to machine learning. It covers all the essential aspects of machine learning, from the basics to advanced concepts, making it an ideal resource for anyone interested in this exciting field.
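The spam-filter illustration of supervised learning can be sketched as a toy Naive Bayes classifier. Everything below (the four training messages, the word counts) is invented for illustration and is not code from the book:

```python
import math
from collections import Counter

# A toy spam filter in the spirit of the supervised-learning example:
# learn word frequencies per class from a tiny invented labelled dataset.
train = [
    ("win cash prize now", "spam"),
    ("cheap prize offer win", "spam"),
    ("meeting agenda for monday", "ham"),
    ("project update and agenda", "ham"),
]

def fit(data):
    """Count word frequencies per class (multinomial Naive Bayes counts)."""
    counts = {"spam": Counter(), "ham": Counter()}
    for text, label in data:
        counts[label].update(text.split())
    return counts

def predict(counts, text):
    """Pick the class with the higher log-likelihood (Laplace smoothing)."""
    vocab = set(w for c in counts.values() for w in c)
    scores = {}
    for label, c in counts.items():
        total = sum(c.values())
        scores[label] = sum(
            math.log((c[w] + 1) / (total + len(vocab))) for w in text.split()
        )
    return max(scores, key=scores.get)

model = fit(train)
print(predict(model, "win a cash prize"))        # spam
print(predict(model, "monday project meeting"))  # ham
```

With equal numbers of spam and ham training messages, comparing likelihoods is equivalent to full Naive Bayes with uniform class priors.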
Probabilistic Graphical Models - Principles and Techniques
Daphne Koller, Nir Friedman
A general framework for constructing and using probabilistic models of complex systems that would enable a computer to use available information for making decisions. Most tasks require a person or an automated system to reason—to reach conclusions based on available information. The framework of probabilistic graphical models, presented in this book, provides a general approach for this task. The approach is model-based, allowing interpretable models to be constructed and then manipulated by reasoning algorithms. These models can also be learned automatically from data, allowing the approach to be used in cases where manually constructing a model is difficult or even impossible. Because uncertainty is an inescapable aspect of most real-world applications, the book focuses on probabilistic models, which make the uncertainty explicit and provide models that are more faithful to reality. Probabilistic Graphical Models discusses a variety of models, spanning Bayesian networks, undirected Markov networks, discrete and continuous models, and extensions to deal with dynamical systems and relational data. For each class of models, the text describes the three fundamental cornerstones: representation, inference, and learning, presenting both basic concepts and advanced techniques. Finally, the book considers the use of the proposed framework for causal reasoning and decision making under uncertainty. The main text in each chapter provides the detailed technical development of the key ideas. Most chapters also include boxes with additional material: skill boxes, which describe techniques; case study boxes, which discuss empirical cases related to the approach described in the text, including applications in computer vision, robotics, natural language understanding, and computational biology; and concept boxes, which present significant concepts drawn from the material in the chapter. 
Instructors (and readers) can group chapters in various combinations, from core topics to more technically advanced material, to suit their particular needs.
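The representation-and-inference idea at the heart of the book can be made concrete with a minimal sketch: a three-variable Bayesian network queried by exhaustive enumeration. The network and its probabilities are invented, and enumeration is only feasible for tiny models; the book develops the far more scalable inference algorithms needed in practice:

```python
from itertools import product

# A minimal Bayesian network (rain -> wet grass <- sprinkler) with
# illustrative, made-up probabilities.
P_rain = {True: 0.2, False: 0.8}
P_sprinkler = {True: 0.3, False: 0.7}
P_wet = {  # P(wet | rain, sprinkler)
    (True, True): 0.99, (True, False): 0.9,
    (False, True): 0.8, (False, False): 0.05,
}

def joint(rain, sprinkler, wet):
    """Factorised joint: P(r) * P(s) * P(w | r, s)."""
    p_w = P_wet[(rain, sprinkler)]
    return P_rain[rain] * P_sprinkler[sprinkler] * (p_w if wet else 1 - p_w)

# Query P(rain | wet grass) by summing out the sprinkler variable.
num = sum(joint(True, s, True) for s in (True, False))
den = sum(joint(r, s, True) for r, s in product((True, False), repeat=2))
print(round(num / den, 3))  # 0.457
```

The factorisation into local conditional tables is exactly the "representation" cornerstone; the sum over unobserved variables is the simplest possible "inference".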
Mathematics for Machine Learning
Marc Peter Deisenroth, A. Aldo Faisal, Cheng Soon Ong
Key Facts from the Book:
- Mathematics is the backbone of machine learning.
- Machine learning is a subfield of artificial intelligence that involves training algorithms to learn from data.
- This book is designed to help readers understand the mathematical concepts that underlie machine learning.
- The book covers topics such as linear algebra, calculus, probability, and optimization.
- The authors assume that readers have some background in mathematics, but they also provide a review of key concepts.
- The book includes exercises and solutions to help readers practice and test their understanding of the material.
- The authors provide examples of how the mathematical concepts covered in the book are used in real-world machine learning applications.

Introduction: The introduction provides an overview of the importance of mathematics in machine learning. The authors explain that machine learning algorithms rely heavily on mathematical concepts such as linear algebra, calculus, probability, and optimization, and they give examples of how these concepts are used in real-world applications such as image recognition and natural language processing. The authors assume that readers have some background in mathematics but also provide a review of key concepts.

Linear Algebra: The first section covers linear algebra, the study of vectors and matrices. The authors explain how vectors and matrices are used to represent data in machine learning applications, and they review key concepts such as vector addition and scalar multiplication. They also cover more advanced topics such as eigenvectors and eigenvalues, which are important for understanding the behavior of some machine learning algorithms.

Calculus: The second section covers calculus, the study of rates of change and the accumulation of quantities. The authors explain how calculus is used in machine learning to optimize functions and estimate parameters. They review key concepts such as derivatives and integrals, and cover more advanced topics such as partial derivatives and gradient descent.

Probability: The third section covers probability, the study of random events. The authors explain how probability is used in machine learning to model uncertainty and make predictions. They review key concepts such as probability distributions and conditional probability, and cover more advanced topics such as Bayesian inference and Markov chains.

Optimization: The fourth section covers optimization, the study of finding the best solution to a problem. The authors explain how optimization is used in machine learning to train models and make predictions. They review key concepts such as convexity and gradient descent, and cover more advanced topics such as stochastic gradient descent and regularization.

Machine Learning Basics: The fifth section covers the basics of machine learning. The authors explain the different types of machine learning algorithms, such as supervised and unsupervised learning, and provide examples of how they are used in real-world applications. They also cover model selection and evaluation, which are important for choosing the best algorithm for a given problem.

Advanced Topics: The sixth section covers advanced topics in machine learning. The authors explain how deep learning algorithms work and provide examples of applications such as image recognition and natural language processing. They also cover reinforcement learning and generative models, which are important for understanding the cutting edge of machine learning research.

Exercises: The final section provides exercises and solutions to help readers practice and test their understanding of the material. The exercises range from basic calculations to implementing machine learning algorithms from scratch, and the authors give guidance on how to approach them and how to use the solutions to check work.

Conclusion: The conclusion summarizes the key concepts covered in each section and gives a final overview of the importance of mathematics in machine learning. The authors emphasize that understanding the mathematical concepts underlying machine learning algorithms is essential for developing new algorithms and applications in the field, and they provide references to additional resources for readers who want to learn more about specific topics.
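The gradient-descent idea that recurs through the calculus and optimization sections fits in a few lines. The function, learning rate, and iteration count below are illustrative choices, not taken from the book:

```python
# Gradient descent on the simple convex function f(x) = (x - 3)^2,
# whose minimiser is x = 3.
def grad(x):
    return 2 * (x - 3)   # derivative of (x - 3)^2

x, lr = 0.0, 0.1
for _ in range(100):
    x -= lr * grad(x)    # step against the gradient

print(round(x, 4))  # 3.0
```

Each step shrinks the distance to the minimiser by a constant factor (here 0.8), which is why the iterate converges geometrically; the same update rule, applied to high-dimensional loss surfaces, is what trains machine learning models.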
Linear Algebra and Learning from Data
Gilbert Strang
Linear algebra and the foundations of deep learning, together at last! From Professor Gilbert Strang, acclaimed author of Introduction to Linear Algebra, comes Linear Algebra and Learning from Data, the first textbook that teaches linear algebra together with deep learning and neural nets. This readable yet rigorous textbook contains a complete course in the linear algebra and related mathematics that students need to know to get to grips with learning from data. Included are: the four fundamental subspaces, singular value decompositions, special matrices, large matrix computation techniques, compressed sensing, probability and statistics, optimization, the architecture of neural nets, stochastic gradient descent and backpropagation.
Statistical Rethinking - A Bayesian Course with Examples in R and Stan
Richard McElreath
Statistical Rethinking: A Bayesian Course with Examples in R and Stan builds readers' knowledge of, and confidence in, statistical modeling. Reflecting the need for even minor programming in today's model-based statistics, the book pushes readers to perform step-by-step calculations that are usually automated. This unique computational approach ensures that readers understand enough of the details to make reasonable choices and interpretations in their own modeling work. The text presents generalized linear multilevel models from a Bayesian perspective, relying on a simple logical interpretation of Bayesian probability and maximum entropy. It covers everything from the basics of regression to multilevel models. The author also discusses measurement error, missing data, and Gaussian process models for spatial and network autocorrelation. By using complete R code examples throughout, this book provides a practical foundation for performing statistical inference. Designed for both PhD students and seasoned professionals in the natural and social sciences, it prepares them for more advanced or specialized statistical modeling.

Web Resource: The book is accompanied by an R package (rethinking) that is available on the author's website and GitHub. The two core functions (map and map2stan) of this package allow a variety of statistical models to be constructed from standard model formulas.
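The grid-approximation technique the book opens with translates directly from R. A minimal Python sketch, with data in the spirit of the book's globe-tossing example (6 successes in 9 binomial trials, flat prior):

```python
from math import comb

# Grid approximation of a posterior: discretise the parameter,
# multiply prior by likelihood at each grid point, then normalise.
n_grid = 1000
grid = [i / (n_grid - 1) for i in range(n_grid)]
prior = [1.0] * n_grid                                   # flat prior
likelihood = [comb(9, 6) * p**6 * (1 - p)**3 for p in grid]
unnorm = [l * pr for l, pr in zip(likelihood, prior)]
total = sum(unnorm)
posterior = [u / total for u in unnorm]

post_mean = sum(p * w for p, w in zip(grid, posterior))
print(round(post_mean, 3))  # 0.636, matching the analytic Beta(7, 4) mean 7/11
```

Doing this multiplication and normalisation by hand, before reaching for MCMC, is precisely the "perform the usually automated steps yourself" pedagogy the description refers to.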
Deep Learning
Ian Goodfellow, Yoshua Bengio, Aaron Courville
Key Insights from "Deep Learning" The primary focus of the book is on deep learning, a subset of machine learning that aims to formulate and solve problems by leveraging large amounts of data. The book provides a comprehensive background on machine learning, introducing concepts like linear algebra, probability, and information theory that are foundational to understanding deep learning. Deep learning algorithms are based on artificial neural networks, specifically those with several hidden layers, making them "deep" structures. The book delves into the details of different types of deep architectures including: Feedforward Neural Networks, Convolutional Networks, Sequence Modeling with Recurrent and Recursive Nets, and Practical Methodology. It covers backpropagation, the primary training algorithm for neural networks. The authors discuss regularisation for deep learning, including early stopping, parameter norm penalties, dataset augmentation, noise robustness, and semi-supervised learning. Goodfellow, Bengio, and Courville explore the nuances of optimization for training deep models. The book presents a comprehensive look at convolutional networks, a class of artificial neural networks that are particularly effective for image classification tasks. The authors also explore the realm of sequence modeling, offering insights into recurrent and recursive nets. There is a focus on practical methodology, providing guidance on how to choose the right architecture, dataset, and training strategies. The book concludes by discussing research perspectives on deep learning, suggesting potential future developments in the field. An In-depth Analysis of "Deep Learning" The book "Deep Learning" by Ian Goodfellow, Yoshua Bengio, and Aaron Courville is a comprehensive guide that presents an insightful overview of the rapidly developing field of deep learning. 
The authors have successfully condensed complex concepts into understandable, digestible content. The book begins by laying a strong foundation in machine learning, introducing essential concepts like linear algebra, probability, and information theory. This approach is crucial for beginners, as a solid understanding of these concepts is fundamental to grasping deep learning.

A significant aspect the authors delve into is the architecture of deep neural networks. Central to the book is its comprehensive exploration of artificial neural networks, particularly those with several hidden layers. The authors also describe various types of deep architectures, such as feedforward neural networks and convolutional networks, offering the reader a holistic understanding of the subject. Their focus on backpropagation, the primary training algorithm for neural networks, offers valuable insights: they lucidly explain how backpropagation adjusts the weights within the network to minimize the difference between actual and predicted outputs.

Furthermore, the book offers an in-depth look at the nuances of optimization for training deep models, including gradient descent and its variants, momentum, adaptive learning rates, and second-order methods. These details are crucial for implementing deep learning algorithms effectively. One of the highlights of the book is its comprehensive coverage of convolutional networks. As these networks are particularly effective for image classification tasks, the authors' exploration of this topic is both timely and relevant. They discuss the structure and functionality of convolutional networks, detailing how they emulate the hierarchical pattern recognition of the human visual cortex.

The authors also delve into sequence modeling, focusing on recurrent and recursive nets. This section is particularly interesting, as it covers architectures designed to handle data where temporal dynamics and sequence are important, such as language modeling or time-series prediction. The practical methodology section is another highlight, providing practical tips on how to choose the right architecture, dataset, and training strategies. This advice is invaluable for beginners and experienced practitioners alike, as it highlights the key considerations in building effective deep learning models. In conclusion, "Deep Learning" by Goodfellow, Bengio, and Courville is a comprehensive resource that offers a detailed overview of the field. It effectively bridges the gap between theory and practice, making it a valuable addition to the bookshelf of any student or practitioner interested in deep learning.
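Backpropagation is the chain rule applied layer by layer. The tiny one-hidden-unit network below, with an invented data point and learning rate, shows the forward pass, the backward pass, and the weight update; it is a sketch of the idea, not the book's notation:

```python
import math
import random

# Manual backpropagation through a one-hidden-unit sigmoid network,
# fitting a single invented data point (x = 1.0, target y = 0.8).
random.seed(0)
w1, w2 = random.random(), random.random()
x, y, lr = 1.0, 0.8, 0.5

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

losses = []
for _ in range(200):
    h = sigmoid(w1 * x)                       # forward pass
    y_hat = sigmoid(w2 * h)
    losses.append(0.5 * (y_hat - y) ** 2)

    d_yhat = y_hat - y                        # backward pass (chain rule)
    d_w2 = d_yhat * y_hat * (1 - y_hat) * h
    d_h = d_yhat * y_hat * (1 - y_hat) * w2
    d_w1 = d_h * h * (1 - h) * x

    w2 -= lr * d_w2                           # gradient descent update
    w1 -= lr * d_w1

print(losses[0] > losses[-1])  # True: training reduces the loss
```

Each `d_*` line multiplies one more local derivative onto the upstream gradient, which is exactly how the algorithm scales to networks with millions of weights.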
Algorithms Illuminated - The basics. Part 1
Tim Roughgarden
Algorithms are the heart and soul of computer science. Their applications range from network routing and computational genomics to public-key cryptography and machine learning. Studying algorithms can make you a better programmer, a clearer thinker, and a master of technical interviews. Algorithms Illuminated is an accessible introduction to the subject for anyone with at least a little programming experience. The exposition emphasizes the big picture and conceptual understanding over low-level implementation and mathematical details, like a transcript of what an expert algorithms tutor would say over a series of one-on-one lessons. Part 1 covers asymptotic analysis and big-O notation, divide-and-conquer algorithms and the master method, randomized algorithms, and several famous algorithms for sorting and selection.
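Merge sort is the canonical divide-and-conquer sorting algorithm analysed in Part 1; a minimal Python sketch (the book itself uses language-neutral pseudocode):

```python
# Merge sort: divide the list in half, sort each half recursively,
# then merge the two sorted halves in linear time -- O(n log n) overall.
def merge_sort(xs):
    if len(xs) <= 1:
        return xs
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])  # divide
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):                   # merge step
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 4, 7, 1, 3, 2, 6]))  # [1, 2, 2, 3, 4, 5, 6, 7]
```

The recurrence T(n) = 2T(n/2) + O(n) that this code induces is the standard opening example for the master method covered in the same part.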
Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow - Concepts, Tools, and Techniques to Build Intelligent Systems
Aurélien Géron
Key Insights from the Book:
- Practical introduction to machine learning: the book provides a hands-on approach, emphasizing practical implementation over theoretical understanding.
- Focus on Scikit-Learn, Keras, and TensorFlow: these libraries are among the most popular tools in machine learning and deep learning, and the book provides detailed instruction on how to use them effectively.
- End-to-end machine learning project: the book walks the reader through a complete project, from gathering the data to training the model and evaluating its performance.
- Deep learning techniques: the book covers convolutional neural networks, recurrent neural networks, and long short-term memory (LSTM) networks.
- Understanding of neural networks: the book helps develop a solid understanding of neural networks and how they function.
- Model evaluation and fine-tuning: the book details how to evaluate a model's performance and how to fine-tune it to improve its accuracy.
- Feature engineering: the book covers in depth how to prepare input data so that machine learning algorithms work more effectively.
- Deployment of machine learning models: the book provides guidance on deploying models into a production environment.
- Insight into the future of AI and machine learning: the book discusses future prospects and trends.
- Exploration of reinforcement learning: the book introduces reinforcement learning, in which an agent learns to behave in an environment by performing actions and observing the results.

Detailed Analysis and Summary
"Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow" by Aurélien Géron is an invaluable resource for anyone seeking to delve into the world of machine learning.
The book provides an in-depth exploration of machine learning, deep learning, and the tools required to build intelligent systems. Unlike many other books on the subject, it emphasizes practical implementation over theoretical understanding, making it particularly suitable for those who learn best by doing. It places a strong focus on Scikit-Learn, Keras, and TensorFlow, some of the most popular libraries in the field. With these libraries, users can implement powerful machine and deep learning models with relative ease, and the book provides comprehensive guidance on how to use them effectively.

One of the book's most salient features is the walkthrough of an end-to-end machine learning project. From gathering and preparing the data to training the model, evaluating its performance, and fine-tuning it to improve its accuracy, readers gain practical experience in every stage of a machine learning project.

Deep learning techniques form a major part of the book. It covers convolutional neural networks, recurrent neural networks, and long short-term memory (LSTM) networks, which are essential for tasks such as image and speech recognition and natural language processing. The book also offers a solid understanding of neural networks, the backbone of many modern machine learning algorithms, explaining how these networks function and how to train them.

Model evaluation and fine-tuning, two crucial aspects of machine learning, are covered in detail: the book explains how to evaluate a model's performance using various metrics and how to improve its accuracy through fine-tuning. Feature engineering, the process of preparing input data so that machine learning algorithms work more effectively, is also covered in depth, with practical guidance on how to do it well.

The book then shows how to deploy machine learning models into a production environment, converting the trained model into a form usable in real-world applications, a crucial step in the machine learning pipeline. It concludes with a discussion of future prospects and trends in AI and machine learning, and introduces reinforcement learning, a rapidly growing area with applications in robotics and game playing, in which an agent learns to behave in an environment by performing actions and observing the results.

In conclusion, "Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow" is a comprehensive guide that provides practical and actionable knowledge on many aspects of machine learning. Whether you are a beginner entering the field or a seasoned professional updating your knowledge, this book will help you understand and implement machine learning effectively.
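The evaluation metrics discussed above reduce to simple counts over a confusion matrix. The labels and predictions below are invented for illustration (the book computes the same quantities with Scikit-Learn's built-in metrics):

```python
# Accuracy, precision, and recall from invented binary labels/predictions.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

tp = sum(t == p == 1 for t, p in zip(y_true, y_pred))  # true positives
tn = sum(t == p == 0 for t, p in zip(y_true, y_pred))  # true negatives
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))

accuracy = (tp + tn) / len(y_true)
precision = tp / (tp + fp)   # of predicted positives, how many were right
recall = tp / (tp + fn)      # of actual positives, how many were found
print(accuracy, precision, recall)  # 0.75 0.75 0.75
```

Keeping precision and recall separate, rather than relying on accuracy alone, matters whenever the classes are imbalanced, a point the book's evaluation chapters stress.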
Introducing Monte Carlo Methods with R
Christian Robert, George Casella
Computational techniques based on simulation have now become an essential part of the statistician's toolbox. It is thus crucial to provide statisticians with a practical understanding of those methods, and there is no better way to develop intuition and skills for simulation than to use simulation to solve statistical problems. Introducing Monte Carlo Methods with R covers the main tools used in statistical simulation from a programmer's point of view, explaining the R implementation of each simulation technique and providing the output for better understanding and comparison. While this book constitutes a comprehensive treatment of simulation methods, the theoretical justification of those methods has been considerably reduced, compared with Robert and Casella (2004). Similarly, the more exploratory and less stable solutions are not covered here.

This book does not require a preliminary exposure to the R programming language or to Monte Carlo methods, nor an advanced mathematical background. While many examples are set within a Bayesian framework, advanced expertise in Bayesian statistics is not required. The book covers basic random generation algorithms, Monte Carlo techniques for integration and optimization, convergence diagnostics, Markov chain Monte Carlo methods, including the Metropolis-Hastings and Gibbs algorithms, and adaptive algorithms. All chapters include exercises, and all R programs are available as an R package called mcsm. The book appeals to anyone with a practical interest in simulation methods but no previous exposure. It is meant to be useful for students and practitioners in areas such as statistics, signal processing, communications engineering, control theory, econometrics, finance and more. The programming parts are introduced progressively to be accessible to any reader.

Christian P. Robert is Professor of Statistics at Université Paris Dauphine, and Head of the Statistics Laboratory of CREST, both in Paris, France.
He has authored more than 150 papers in applied probability, Bayesian statistics and simulation methods. He is a fellow of the Institute of Mathematical Statistics and the recipient of an IMS Medallion. He has authored eight other books, including The Bayesian Choice which received the ISBA DeGroot Prize in 2004, Monte Carlo Statistical Methods with George Casella, and Bayesian Core with Jean-Michel Marin. He has served as Joint Editor of the Journal of the Royal Statistical Society Series B, as well as an associate editor for most major statistical journals, and was the 2008 ISBA President. George Casella is Distinguished Professor in the Department of Statistics at the University of Florida. He is active in both theoretical and applied statistics, is a fellow of the Institute of Mathematical Statistics and the American Statistical Association, and a Foreign Member of the Spanish Royal Academy of Sciences. He has served as Theory and Methods Editor of the Journal of the American Statistical Association, as Executive Editor of Statistical Science, and as Joint Editor of the Journal of the Royal Statistical Society Series B. In addition to books with Christian Robert, he has written Variance Components, 1992, with S.R. Searle and C.E. McCulloch; Statistical Inference, Second Edition, 2001, with Roger Berger; and Theory of Point Estimation, Second Edition, 1998, with Erich Lehmann. His latest book is Statistical Design 2008.
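The book's foundational technique, Monte Carlo integration, fits in a few lines. Its code is in R, so the Python sketch below is only an assumed-equivalent illustration, estimating the integral of x² over [0, 1], whose exact value is 1/3:

```python
import random

# Monte Carlo integration: the integral of f over [0, 1] equals the
# expected value of f(U) for uniform U, so average many random draws.
random.seed(42)  # fixed seed for reproducibility

def mc_integrate(f, n):
    """Estimate the integral of f over [0, 1] from n uniform samples."""
    return sum(f(random.random()) for _ in range(n)) / n

estimate = mc_integrate(lambda x: x * x, 100_000)
print(estimate)  # close to the exact value 1/3
```

The error of this estimator shrinks like 1/sqrt(n) regardless of dimension, which is why simulation methods like these scale to problems where deterministic quadrature does not.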
L.
25 October 2024: Salman was very helpful and provided me with valuable tips to support the start of my Data Science career.