Optimization algorithms and methods

Hyper-heuristic

A hyper-heuristic is a heuristic search method that seeks to automate, often by incorporating machine learning techniques, the process of selecting, combining, generating or adapting several simpler heuristics (or components of such heuristics) to efficiently solve computational search problems. One motivation for studying hyper-heuristics is to build systems that can handle classes of problems rather than solving just one problem. There may be multiple heuristics from which to choose for solving a problem, each with its own strengths and weaknesses. The idea is to automatically devise algorithms by combining the strengths and compensating for the weaknesses of known heuristics. A typical hyper-heuristic framework consists of a high-level methodology and a set of low-level heuristics (either constructive or perturbative). Given a problem instance, the high-level method selects which low-level heuristic to apply at any given time, depending on the current problem state (or search stage) as determined by features. (Wikipedia).
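The selection loop described above can be sketched in a few lines of Python. This is a minimal illustration, not a production framework: the low-level heuristic names, the epsilon-greedy high-level method and the toy one-max objective (maximise the number of 1-bits) are all assumptions chosen for the example, standing in for the richer perturbative heuristics and feature-based selection a real hyper-heuristic system would use.

```python
import random

# Illustrative perturbative low-level heuristics on a bit string.
def flip_one(s):
    """Flip a single random bit."""
    i = random.randrange(len(s))
    return s[:i] + [1 - s[i]] + s[i + 1:]

def flip_two(s):
    """Flip two random bits (may be the same position twice)."""
    return flip_one(flip_one(s))

def set_random(s):
    """Set a random bit to 1."""
    i = random.randrange(len(s))
    return s[:i] + [1] + s[i + 1:]

def hyper_heuristic(n=32, steps=500, eps=0.2, seed=0):
    """Epsilon-greedy selection hyper-heuristic for the one-max problem."""
    random.seed(seed)
    heuristics = [flip_one, flip_two, set_random]
    score = [0.0] * len(heuristics)  # cumulative reward per heuristic
    uses = [1] * len(heuristics)     # application counts (start at 1 to avoid /0)
    sol = [random.randint(0, 1) for _ in range(n)]
    best = sum(sol)
    for _ in range(steps):
        # High-level method: exploit the best-scoring heuristic,
        # but explore a random one with probability eps.
        if random.random() < eps:
            h = random.randrange(len(heuristics))
        else:
            h = max(range(len(heuristics)), key=lambda i: score[i] / uses[i])
        cand = heuristics[h](sol)
        reward = sum(cand) - sum(sol)
        score[h] += max(reward, 0)   # credit the heuristic for improvements
        uses[h] += 1
        if sum(cand) >= sum(sol):    # accept non-worsening moves
            sol = cand
            best = max(best, sum(sol))
    return best

print(hyper_heuristic())  # best objective value found (at most n)
```

The key separation is visible here: the low-level heuristics know about the solution representation, while the high-level method only sees their observed rewards, so the same selection logic could be reused on a different problem by swapping the heuristic set.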

How Is the ADHD Brain Different?

If you’re online, you may notice that conversations around ADHD are everywhere. You may even be starting to wonder, as you flick from one app to the next, whether you yourself may have ADHD. So in Part 1 of this series about ADHD, Julian explores what this disorder is, what’s happening in the

From playlist Seeker+

Why You Think You Might Have ADHD

Most people experience many symptoms of ADHD to some degree, and one of the best known is the inability to stay focused. However, there are other less common symptoms of ADHD that people often struggle with. In this episode on ADHD, Julian describes these other symptoms and how they c

From playlist Seeker+

Hegel versus Marx

Another re-upload from the previous channel. This short clip comes from the 1987 interview of Peter Singer with Bryan Magee on the life and philosophical work of Hegel and Marx. I put it together quite a long time ago, but I still think it provides a relatively decent summary of some of th

From playlist Hegel

On a conjecture of Poonen and Voloch I: Probabilistic models(...) - Sawin - Workshop 1 - CEB T2 2019

Will Sawin (Columbia University) / 21.05.2019 On a conjecture of Poonen and Voloch I: Probabilistic models for counting rational points on random Fano hypersurfaces. Poonen and Voloch have conjectured that almost every degree d Fano hypersurface in P^n defined over the field of rational

From playlist 2019 - T2 - Reinventing rational points

Model Evaluation | Stanford CS224U Natural Language Understanding | Spring 2021

For more information about Stanford’s Artificial Intelligence professional and graduate programs, visit: https://stanford.io/ai To learn more about this course visit: https://online.stanford.edu/courses/cs224u-natural-language-understanding To follow along with the course schedule and s

From playlist Stanford CS224U: Natural Language Understanding | Spring 2021

SetReplace & Fundamental Physics

Maksim Piskunov & Jonathan Gorard

From playlist Wolfram Technology Conference 2019

Hypergeometric Motives - Fernando Villegas

Fernando Villegas, University of Texas at Austin, March 15, 2012. The families of motives of the title arise from classical one-variable hypergeometric functions. This talk will focus on the calculation of their corresponding L-functions both in theory and in practice. These L-functions provide

From playlist Mathematics

PB2 - Population-Based Bandit Optimization

Notion Link: https://ebony-scissor-725.notion.site/Henry-AI-Labs-Weekly-Update-July-15th-2021-a68f599395e3428c878dc74c5f0e1124 Chapters 0:00 Introduction 2:41 Hyperparameter Optimization 3:44 Population-Based Training 6:12 Evolution + Bayesian Optimization 8:54 ASHA 10:48 Results Thanks

From playlist AI Weekly Update - July 15th, 2021!

Stanford Seminar - Emerging risks and opportunities from large language models, Tatsu Hashimoto

Tatsu Hashimoto, Professor of Computer Science at Stanford University April 20, 2022 Large, pre-trained language models have driven dramatic improvements in performance for a range of challenging NLP benchmarks. However, these language models also present serious risks such as eroding use

From playlist Stanford CS521 - AI Safety Seminar

Gauss Prize Lecture: Compressed sensing — from blackboard to bedside — David Donoho — ICM2018

Compressed sensing — from blackboard to bedside David Donoho Abstract: In 2017, next-generation Magnetic Resonance Imaging (MRI) devices by General Electric and Siemens received US Food and Drug Administration approval, allowing them to be used in the US health care marketplace. This year

From playlist Special / Prizes Lectures

Hegel and his Heirs

Robert Harrison and Adrian Daub discuss Georg Wilhelm Friedrich Hegel and his heirs a few years back in an episode of Entitled Opinions, a KZSU Stanford University program. http://french-italian.stanford.edu/op... Hegel was one of the most important and influential 19th century German phi

From playlist Hegel

Andrew Neitzke: On Hitchin’s hyperkähler metric on moduli spaces of Higgs bundles

Abstract: I will review a conjecture (joint work with Davide Gaiotto and Greg Moore) which gives a description of the hyperkähler metric on the moduli space of Higgs bundles, and recent joint work with David Dumas which has given evidence that the conjecture is true in the case of SL(2)-H

From playlist Mathematical Physics

Stanford CS330: Deep Multi-task & Meta Learning I 2021 I Lecture 16

For more information about Stanford's Artificial Intelligence professional and graduate programs visit: https://stanford.io/ai To follow along with the course, visit: http://cs330.stanford.edu/fall2021/index.html To view all online courses and programs offered by Stanford, visit: http:/

From playlist Stanford CS330: Deep Multi-Task & Meta Learning I Autumn 2021 I Professor Chelsea Finn

Related pages

Particle swarm optimization | Knapsack problem | Local search (optimization) | Memetic algorithm | Learning classifier system | Reinforcement learning | Meta-optimization | Artificial intelligence | Maximum cut | Boolean satisfiability problem | No free lunch in search and optimization | Constructive heuristic | Vehicle routing problem | Quadratic assignment problem | Genetic programming | List of knapsack problems | Bin packing problem