In computer science, locality-sensitive hashing (LSH) is an algorithmic technique that hashes similar input items into the same "buckets" with high probability. (The number of buckets is much smaller than the universe of possible input items.) Since similar items end up in the same buckets, this technique can be used for data clustering and nearest neighbor search. It differs from conventional hashing techniques in that hash collisions are maximized, not minimized. Alternatively, the technique can be seen as a way to reduce the dimensionality of high-dimensional data; high-dimensional input items can be reduced to low-dimensional versions while preserving relative distances between items. Hashing-based approximate nearest neighbor search algorithms generally use one of two main categories of hashing methods: either data-independent methods, such as locality-sensitive hashing (LSH); or data-dependent methods, such as locality-preserving hashing (LPH). (Wikipedia).
From playlist IR12 Locality Sensitive Hashing
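As a minimal illustration of the definition above (not taken from any of the videos listed below), here is a random-hyperplane LSH sketch in Python; the vectors, seed, and number of hyperplanes are arbitrary choices for demonstration:

```python
import numpy as np

rng = np.random.default_rng(42)

def lsh_signature(v, planes):
    """Bucket key: the sign of v's projection onto each random hyperplane."""
    return tuple((planes @ v) > 0)

# 8 random hyperplanes in 5-D: vectors pointing in similar directions
# share signature bits with high probability.
planes = rng.standard_normal((8, 5))

a = np.array([1.0, 0.9, 0.0, 0.2, 0.1])
b = 2.0 * a   # same direction as a (cosine similarity 1)
c = -a        # opposite direction from a

sig_a = lsh_signature(a, planes)
sig_b = lsh_signature(b, planes)  # identical to sig_a: scaling preserves signs
sig_c = lsh_signature(c, planes)  # every bit flipped relative to sig_a
```

Items with equal signatures land in the same bucket, so candidate neighbors can be found by comparing only within a bucket rather than against the whole dataset.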
Finding Friends in High Dimensions: Locality-Sensitive Hashing For Fun and Friendliness! by Aaron Levin
From playlist !!Con 2017
Rasa Reading Group: Reformer: The Efficient Transformer (Continued)
Join Rachael as she continues reading the paper "Reformer: The Efficient Transformer" (Kitaev et al., 2020) after a five-month break. This paper was published at the International Conference on Learning Representations (ICLR). Link to paper: https://openreview.net/forum?id=rkgNKkHtvB
From playlist Rasa Reading Group
Locality Sensitive Hashing (LSH) for Search with Shingling + MinHashing (Python)
Locality sensitive hashing (LSH) is a popular technique used in approximate nearest neighbor (ANN) search. Efficient similarity search is a profitable problem - it is at the core of several billion (and even trillion) dollar companies. LSH consists of a variety of different techniques.
From playlist Vector Similarity Search and Faiss Course
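The shingling + MinHashing pipeline named in the video title above can be sketched as follows. This is an illustrative toy, not the video's code: the function names and the use of Python's built-in `hash` (seeded per tuple, consistent within one run) are assumptions:

```python
def shingles(text, k=3):
    """Character k-shingles of a string, as a set."""
    return {text[i:i + k] for i in range(len(text) - k + 1)}

def minhash_signature(shingle_set, seeds):
    """One minimum hash value per seeded hash function."""
    return [min(hash((seed, s)) for s in shingle_set) for seed in seeds]

def estimated_jaccard(sig1, sig2):
    """Fraction of agreeing minhashes estimates Jaccard similarity."""
    return sum(a == b for a, b in zip(sig1, sig2)) / len(sig1)

seeds = list(range(128))
s1 = minhash_signature(shingles("locality sensitive hashing"), seeds)
s2 = minhash_signature(shingles("locality-sensitive hashing"), seeds)
s3 = minhash_signature(shingles("completely different text!!"), seeds)
```

Near-duplicate strings share most shingles, so their signatures agree on most positions, while unrelated strings rarely agree; comparing 128-number signatures is much cheaper than comparing full shingle sets.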
Rasa Reading Group: Reformer: The Efficient Transformer
Join Rachael as she starts reading the paper "Reformer: The Efficient Transformer" (Kitaev et al., 2020). This paper was published at the International Conference on Learning Representations (ICLR). Link to paper: https://openreview.net/forum?id=rkgNKkHtvB Want to build your own bot? http
From playlist Rasa Reading Group
Reformer: The Efficient Transformer
The Transformer for the masses! Reformer solves the biggest problem with the famous Transformer model: its huge resource requirements. By cleverly combining Locality Sensitive Hashing and ideas from Reversible Networks, the classically huge footprint of the Transformer is drastically reduced.
From playlist Deep Learning Architectures
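The bucketing idea behind Reformer's LSH attention - hash the (shared) query/key vectors with a random rotation, then compute attention only within each bucket - can be sketched as below. This is a toy illustration with assumed shapes and no sorting, chunking, or multi-round hashing, not Reformer's actual implementation:

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)
d, n_tokens, half_buckets = 16, 12, 4

# Angular LSH: project with a random rotation, take argmax over [Rv; -Rv].
rotation = rng.standard_normal((half_buckets, d))

def bucket(v):
    r = rotation @ v
    return int(np.argmax(np.concatenate([r, -r])))

# Shared-QK attention: queries and keys are the same vectors.
x = rng.standard_normal((n_tokens, d))

groups = defaultdict(list)
for i, v in enumerate(x):
    groups[bucket(v)].append(i)

# Softmax attention restricted to each bucket: cost is the sum of squared
# bucket sizes instead of O(n^2) over all tokens.
out = np.zeros_like(x)
for idx in groups.values():
    block = x[idx]                                   # (bucket_size, d)
    scores = block @ block.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    out[idx] = weights @ block
```

Because the hash depends only on direction, cosine-similar vectors tend to share a bucket, so restricting attention to buckets approximates full attention at a fraction of the cost.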
[ML NEWS] Apple scans your phone | Master Faces beat face recognition | WALL-E is real
#mlnews #apple #nolamarck Your update on the latest news in the AI and Machine Learning world.
OUTLINE:
0:00 - Intro
0:15 - Sponsor: Weights & Biases
3:30 - Apple to scan iDevices for illegal content
14:10 - EU approves chatcontrol
15:20 - Machine Learning FAQ book
17:40 - TimeDial & Dis
From playlist All Videos
CSE 519 -- Lecture 22, Fall 2020
From playlist CSE 519 -- Fall 2020