ImageGPT (Generative Pre-training from Pixels). Connor Shorten. June 2020.
@misc{connor_shorten_imagegpt_2020,
title = {{ImageGPT} ({Generative} {Pre}-training from {Pixels})},
url = {https://www.youtube.com/watch?v=7rFLnQdl22c},
abstract = {This video will explore the exciting new 6.8 Billion parameter ImageGPT model! The researchers show that better and larger generative models learn better representations for tasks like ImageNet classification!
Thanks for watching! Please Subscribe!
Paper Links:
ImageGPT (Blog Post): https://openai.com/blog/image-gpt/
ImageGPT (Paper): https://cdn.openai.com/papers/Generat...
A Survey of Long-term Context in Transformers: https://www.pragmatic.ml/a-survey-of-...
Google TPUs: https://cloud.google.com/tpu/docs/tpus
The Illustrated Transformer: http://jalammar.github.io/illustrated...
PixelCNN: https://keras.io/examples/generative/...
PixelCNN (Paper): https://arxiv.org/pdf/1606.05328.pdf
Contrastive Predictive Coding: https://arxiv.org/pdf/1905.09272.pdf
Big BiGAN: https://arxiv.org/pdf/1907.02544.pdf
BERT: https://arxiv.org/pdf/1810.04805.pdf
Rethinking Pre-training and Self-Training: https://arxiv.org/pdf/2006.06882.pdf},
language = {en},
urldate = {2023-07-28},
author = {{Connor Shorten}},
month = jun,
year = {2020},
keywords = {\#Transformer, \#Vision, \#Youtube, /unread},
}
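Annotation: the video covers OpenAI's ImageGPT, which pre-trains a GPT-style decoder-only transformer to predict the next pixel of an image flattened into a 1-D raster-order sequence (the paper quantizes colors into a small palette), then shows the learned representations transfer to ImageNet classification. The following is a minimal PyTorch sketch of that next-pixel objective, not the paper's method as published: the tiny model dimensions, the 16-color vocabulary, the 8x8 image size, and the random toy batch are all illustrative assumptions.

import torch
import torch.nn as nn

VOCAB = 16     # assumed toy palette; the paper uses a much larger learned color palette
SEQ_LEN = 64   # assumed 8x8 toy image, flattened in raster order
EMBED = 128

class TinyImageGPT(nn.Module):
    def __init__(self):
        super().__init__()
        self.tok = nn.Embedding(VOCAB, EMBED)
        self.pos = nn.Embedding(SEQ_LEN, EMBED)
        layer = nn.TransformerEncoderLayer(
            d_model=EMBED, nhead=4, dim_feedforward=256, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(EMBED, VOCAB)

    def forward(self, x):
        # Causal mask: position i may only attend to positions <= i,
        # which is what makes the model autoregressive over pixels.
        mask = nn.Transformer.generate_square_subsequent_mask(x.size(1))
        pos = torch.arange(x.size(1))
        h = self.tok(x) + self.pos(pos)
        h = self.blocks(h, mask=mask)
        return self.head(h)

model = TinyImageGPT()
opt = torch.optim.Adam(model.parameters(), lr=3e-4)
pixels = torch.randint(0, VOCAB, (8, SEQ_LEN))  # random toy batch of flattened images

# One pre-training step: predict pixel t from pixels < t via cross-entropy.
opt.zero_grad()
logits = model(pixels[:, :-1])
loss = nn.functional.cross_entropy(
    logits.reshape(-1, VOCAB), pixels[:, 1:].reshape(-1))
loss.backward()
opt.step()
print(f"next-pixel loss: {loss.item():.3f}")

After pre-training, the paper evaluates representations by freezing the model and fitting a linear probe (or by fine-tuning end to end); in this sketch that would mean training a linear classifier on pooled hidden states from self.blocks.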