12:11 I'm in the video! :O This is amazing! The robots love me!
Also... this might be a sign that I need more to do with my life besides comment on YouTube videos...
+IceMetalPunk hey, thanks for the comments! >Sean
Hi IceMetalPunk. ☺️
Well, ask him for the code. That way, you'll never have to comment again!
The code is in the description.
Anyway, excitement about my five minutes of robot fame aside, I did want to comment something related to the actual subject of this video :)
My favorite method of generating plausible text is a simple multilayer Markov chain. Way back in my freshman year at uni, one of my assignments was to create a three-layer Markov chain trained on excerpts from the Wizard of Oz to generate a new page from the book. It was interesting. But then, we were allowed to train our Markov chain's dictionary on *any corpus* we wanted, so I chose the US Constitution. Needless to say, seeing brand-new laws come into existence at the (metaphorical) hands of a computer was extremely amusing.
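For anyone who wants to try the same trick, here is a minimal sketch of an order-3 word-level Markov chain in Python (a toy reconstruction of the idea, not the original assignment code; "constitution.txt" is a placeholder for whatever corpus you pick):

    import random
    from collections import defaultdict

    def build_chain(text, order=3):
        # Map each run of `order` consecutive words to the words seen after it.
        words = text.split()
        chain = defaultdict(list)
        for i in range(len(words) - order):
            chain[tuple(words[i:i + order])].append(words[i + order])
        return chain

    def generate(chain, order=3, length=60):
        # Start from a random state and take a weighted random walk.
        state = random.choice(list(chain.keys()))
        out = list(state)
        for _ in range(length):
            followers = chain.get(tuple(out[-order:]))
            if not followers:
                break
            out.append(random.choice(followers))
        return " ".join(out)

    corpus = open("constitution.txt").read()  # placeholder: any corpus you like
    print(generate(build_chain(corpus)))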
I love you, baby.
Nice try computer. Your phony back story isn't fooling anyone. Well done on the improvements though. Your comments are getting better.
IceMetalPunk I am a knight and what I say to all this is Nee!
Just nee, nee, nee!
+IceMetalPunk
A little late, but if you have any excerpts, please post them?
After this video I will never be able to trust Computerphile comments ever again.
Developing an AI based on YouTube's comment section might not be the brightest idea in the world. The "I" in AI stands for intelligence.
Maybe he was developing Artificial Idiocy
I always preferred Artificial Incompetence.
Artificial ignorance.
Artificial imitation
Space doubt wins....
"I was able to want to be able to be happy."
me too... me toooo...
Same
That's an important first step.
It has reached consciousness!
watcherFox they became self aware! 😱
IceMetalPunk comments on my videos too :-D Deffo real person!
xisumavoid xisuma here wow
Cool to see you here
Hey Xisuma. Love your vids. good channel you watch here
didnt expect to see you here :D
xisumavoid Oh, hey xisuma
First YouTube Comment
Mike is the best speaker IMO
Computerphile reply
1:25 Don't right?
Do you support online learning from websites like "Free Code Camp", "SoloLearn", and "Codecademy"?
gosh! I am soo noob
Great video!
I find your own difference.
This relayed having two first.
We all find your own difference on this blessed day.
Hmm, profound words; they art spilling from thine mouth.
I just wanted to let you know that I was just wondering if you were able to get the kids to school.
Train it on computerphile transcripts, then act out the output!
Yes! I second this entirely!!
"This video was written by an AI."
even better, train it on classic literature, read its essays at a TED Talk
Using comments as input on university computers? Would be a shame if someone would '); DROP TABLE Students;--
Hey! Leave Bobby out of this! :D
I wonder what lil Bobby has gotten himself into this time
Does it write "first"-comments?
Ebumbaya ' first
Probably.
Yes, but excessively so and at inappropriate times.
@@Treddian So... like normal YouTube comments then.
In one of the screens you can see a comment which says "One!", which is kinda the same
The output is nonsense, but it looks quite similar to badly translated Chinglish found in the manuals for cheap eBay electronics :)
I am impressed with the way this isn't just randomly sticking words together, it's actually making the words themselves, letter by letter - without even really knowing what a "word" even is.
@IceMetalPunk Please drop a hello
Hello! :D
You're internet famous now!
At least within the Computerphile-watching community, and for the next week or two. But I'll take that! :D
IceMetalPunk yey!
That's amazing! How many comments have you posted lol!
I dunno why, but I really love this guy. He's very good at explaining things and he's funny. Thanks Mike!
Agreed!
I think it's his passion for explaining and kinda learning at the same time.
"I was able to want to be able to be happy" The network is trying to tell us something.
"I find your own difference." It sounds so deep and profound!
Now all we need to do is translate it into Latin: "Differentiae tuae invenio."
That's one solid tattoo right there! Or maybe it should be Chinese, if someone would like to chime in on that one?! :D
Amazing stuff! And ironically, the fact that it outputs typos every now and then makes the comments much more realistic.
Watching this 5 years later now that we have ChatGPT and such...
And where he said that a chatbot built from this is "theoretically" possible... and here we are.
It would be interesting to create a neural network that would predict comment likes/dislikes based on the content of the comment.
@@dejfcold that's an over simplification
I predict your comment will receive 76 likes before it is forgotten.
@@abstractapproach634 I don't think it's gonna make it :(
@@samuelthecamel Prob 42 likes
IceMetalPunk is a YouTuber and gamer who makes Minecraft let's plays. This dude exists!
I only know them as a YouTube viewer whose viewing habits often overlap with mine.
Well, I used to... I haven't had much time lately :( Having two jobs can really hinder a social (media) life XD
I just discovered Numberphile and Computerphile recently and appreciate everything you guys are doing here. The content is definitely top notch. I'm lovin' it.
I guess the network would output a lot of "first" as comments and a lot of "first" related replies
10:23 Neural network literally saying: "I don't think before doing"
Mike Pound, you created a NN that can lie.
first!'); DROP TABLE comments;--
Hello, little Bobby Tables :D
IceMetalPunk That should teach you to sanitize your inputs.
Clever, except that the machine learning server is probably quite remote from the administrative servers
Yea because a Doctor of computer science would use SQL HAHAHAHA... Not.
Hey Computerphile, I love your comment at 13:43, where you asked the question "How do you watch if you basically have one hardware?".
Great video, quite fun to read through those generated comments to find some that almost make sense.
Hi Max. If you're interested in AI, you can check out my introductory book "How to Create Machine Superintelligence", available for FREE in the Amazon Kindle store till 6th October.
In this book, I go over the following:
- intelligence as a form of information processing
- basics of classical computing
- basics of quantum computing
- some basics of machine learning and artificial neural networks
- and also share some thoughts on building general AI and dealing with the control problem
Bots commenting on bots..
One more comment for your neural network :) I admire your work and your ambition.
"I said yesterday I walked to the park 2 days ago."
IceMetalPunk is a chat bot helping to train a chatbot to become a chatbot.
:) Looks like it is. If you're interested in AI, you can check out my introductory book "How to Create Machine Superintelligence", available for FREE in the Amazon Kindle store till 6th October.
In this book, I go over the following:
- intelligence as a form of information processing
- basics of classical computing
- basics of quantum computing
- some basics of machine learning and artificial neural networks
- and also share some thoughts on building general AI and dealing with the control problem
I won't read it, because if you're rationalising what intelligence is, you're misrepresenting what intelligence is.
He is so good at explaining this stuff
+IceMetalPunk .. we are waiting!
Wait no longer! :D
I realize this is more powerful in a general sense, but to just generate text, (what I know of as) a Markov chain seems... easier. They're fun and extremely easy to get started with if nothing else, and I feel like playing with the generator order gives a sense of how (non)random language is.
A recurrent neural network looks like a "recursive" Markov chain to me.
Yeah, definitely. This isn't the best use of a recurrent neural network, but it does sort of help for demonstration purposes. But I love me some Markov chains! I always wanted to use Markov chains to generate songs based on a given artist's corpus of song lyrics, but the one time I tried to make one, I did it very crudely and naively and ended up accidentally DOS'ing a lyrics site... so I stopped XD I should get back to that one day, but do it more intelligently this time... if I ever have any free time for that anymore.
@@superdau An RNN is kinda like a Markov chain, but using thousands of different states instead of one. Also, each state is modulated by weighted connections instead of simple probabilities. It's much harder for a Markov chain working character by character to match the quality of an RNN.
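To make that contrast concrete, here is a toy numpy sketch of a single step of a vanilla character-level RNN (in the spirit of Karpathy's char-rnn; the weights would normally be learned, and all names here are made up for illustration):

    import numpy as np

    # One step of a vanilla character-level RNN. Unlike a Markov table lookup,
    # the hidden state h is a vector summarizing everything seen so far.
    vocab_size, hidden_size = 128, 64
    Wxh = np.random.randn(hidden_size, vocab_size) * 0.01   # input -> hidden
    Whh = np.random.randn(hidden_size, hidden_size) * 0.01  # hidden -> hidden
    Why = np.random.randn(vocab_size, hidden_size) * 0.01   # hidden -> output
    bh, by = np.zeros((hidden_size, 1)), np.zeros((vocab_size, 1))

    def step(char_index, h):
        x = np.zeros((vocab_size, 1))
        x[char_index] = 1                    # one-hot encode current character
        h = np.tanh(Wxh @ x + Whh @ h + bh)  # mix new input with history
        logits = Why @ h + by
        probs = np.exp(logits) / np.sum(np.exp(logits))  # softmax over next char
        return probs, h

    h = np.zeros((hidden_size, 1))
    probs, h = step(ord('a'), h)             # untrained, so output is near-uniform
    next_char = chr(np.random.choice(vocab_size, p=probs.ravel()))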
This was done by a YouTuber called CaryKH; he applies neural networks to various tasks.
Yeah, I love the stuff he does - like the language recognition :)
I was about to mention. I wonder what difference there is between their methods, if any
Charles Thatisall There might be; if there is, I'm voting for an epic NN battle.
"I have no ideas so I use a neural network"
why are you so mean?
Because there is better AI. I am not mean, just angry for not being heard.
Mike's the best at Computerphile videos
"I was able to want to be able to be happy". A perfectly normal UA-cam comment.
Immediately came to see IceMetalPunk's comment.
I commented a few comments just now... actually, I'm sure my habit of commenting multiple times throughout the course of watching a video helped the AI notice me xD
Here! As part of the DOG 'POUND' fandom for computer science knowledge smackdowns!!!
How many times does 'Hitler' pop up in those generated comments?
Too often #Godwin’s law
"I find your own difference" is amazingly deep!
Extremely cool demonstration! I find neural networks so fascinating.
surprised it didn't say "WE LOVE YOU MIKE" all the time
Thanks, great vid. Also, it would be great to hear from you about spiking neural networks. There isn't much free, high-quality information about spiking nets.
Mike is one super enthusiastic computer scientist! Go Mike!
please keep on making videos. it really really really helps.
We welcome our new AI overlord! 👑
Cool :) I also did this a while ago when I learned about Andrej Karpathy's blog, to generate song lyrics using a certain music style as input.
Where's IceMetalPunk? Someone let them know.
Well, aside from the many comments my videos are now getting to let me know about this, I'm also subscribed to Computerphile. Hence why I leave enough comments for the AI to notice me, senpai :D So I saw it! :D
Isn't trying to generate an AI trained on YouTube comments a bit counterproductive?
Depending on the channel, it might indeed become an Artificial Stupidity. However, as this AI is trained on Computerphile comments it's probably above average.
Daan Wilmer tru nuff, it was a joke tho.
No, because even if the bot replicates natural stupidity, you've proved that you have an algorithm that can replicate human behaviour accurately.
Automatic writing to a whole new level! André Breton would be proud!
"I find your own difference." -Hessil200, 2017
10:24 "I was able to want to be able to be happy."
I recently texted my partner "Give yourself 5 minutes to clear the car, it may be frosty" and my phone suggested "relationship" as the next word. Hmmm?
Lorem ipsum 2.0
Seriously though, this is an interesting way of producing random text which at the same time is not entirely gibberish. I could use this as-is, now!
Nice to see you using Lua for this! Just recently began learning the language
I would have trained the thing on the comments of just one video, and I would have had only CAPS. Other than that, good job!
IceMetalPunk: You've been targeted for termination.
D: NO, I welcome my robot overlords! I'm honored they picked me and I will work with them however they see fit! Don't kill me!
The craziest thing is - that most actual comments look like that to me.
Hahaha, it generated my name in the random username part! (Yeah I've commented on multiple Computerphile videos in the past) I did not expect to see that.
Since RNNs are ideal for predicting the next thing in a long sequence, it's really fun to train them on audio (predicting the next position of the waveform based on the audio leading up to now). It's much slower than dealing with text, though. In case anyone's curious, I've made a couple of videos showing results of that (using torch-rnn, the same software Mike uses here).
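One plausible way to turn audio into something a character-level model like torch-rnn can eat (a guess at the general approach, not necessarily what those videos did; filenames are placeholders) is to quantize each sample to one of 256 levels, so the waveform becomes a stream of byte-sized "characters":

    import wave
    import numpy as np

    # Assumes a 16-bit PCM WAV; "input.wav" and "corpus.bytes" are placeholders.
    with wave.open("input.wav", "rb") as w:
        raw = w.readframes(w.getnframes())
    samples = np.frombuffer(raw, dtype=np.int16).astype(np.float32)
    samples /= max(np.abs(samples).max(), 1.0)              # normalize to [-1, 1]
    quantized = ((samples + 1.0) * 127.5).astype(np.uint8)  # 256 "characters"
    with open("corpus.bytes", "wb") as f:
        f.write(quantized.tobytes())                        # train the char-RNN on this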
Wow, it's better than mine. The chatbot my team made could only successfully achieve contextual awareness. We used LSTMs (a type of node for RNNs).
This video was spectacular.
- A real person
Gentlemen, our work here is done. Computerphile will now thrive on its own.
Bzzz So from now on, all the comments will be written by either ladies or bots?
It's matching his idea professional, and probably about it. After creators and governments is made about them.
Gratz IceMetalPunk You win the comments...! xD
Extremely belated thank you!
I'm glad I have contributed to the advancement of automatic Internet trolling.
Great video. Phone and so, crunching (with the jump).
I bought a dinosaur from a vending machine, but I was unable to transport it home on my unicycle.
Auto-completion has been given too much power. It must be stopped...
Carykh
12:12 So did IceMetalPunk ever appear? Did the happy reunion ever happen? I so want to know!
I did appear! :D
Great video!
This guy needs his own channel.
An AI with the average intelligence of a YouTube commenter...?
I jest, but this is really cool.
sirkowski why reply to your own comment?
I think there was a horribly expensive experiment where stocks were either bought or sold based on a neural network trying to decipher the mood of the language used in relation to the stock.
I love you, Mike Pound
Nice one. Looks like me using keyboard suggestions to generate test text when developing an app ^^
Hello. If you're interested in AI, you can check out my introductory book "How to Create Machine Superintelligence", available for FREE in the Amazon Kindle store till 6th October.
In this book, I go over the following:
- intelligence as a form of information processing
- basics of classical computing
- basics of quantum computing
- some basics of machine learning and artificial neural networks
- and also share some thoughts on building general AI and dealing with the control problem
There are several different ways that predictive text systems can be built, but it seems the most common way is actually a trie-based Markov chain system, not a neural network. The result is basically the same in the case of predicting text, though :)
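For the curious, the trie half of that design is just a prefix tree with word frequencies at the nodes; here is a rough, self-invented sketch of the completion step (the Markov half would then rank candidate next words by what followed the previous word):

    # Sketch of trie-based word completion: walk the typed prefix, then return
    # the most frequent whole words found beneath that node.
    class TrieNode:
        def __init__(self):
            self.children = {}
            self.count = 0  # how many times a word ends at this node

    def insert(root, word):
        node = root
        for ch in word:
            node = node.children.setdefault(ch, TrieNode())
        node.count += 1

    def complete(root, prefix, k=3):
        node = root
        for ch in prefix:
            if ch not in node.children:
                return []
            node = node.children[ch]
        results = []
        def walk(n, suffix):
            if n.count:
                results.append((n.count, prefix + suffix))
            for ch, child in n.children.items():
                walk(child, suffix + ch)
        walk(node, "")
        return [w for _, w in sorted(results, reverse=True)[:k]]

    root = TrieNode()
    for w in ["the", "the", "then", "there", "they", "though"]:
        insert(root, w)
    print(complete(root, "th"))  # 'the' ranks first; ties broken arbitrarily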
That ASCII art is awesome
13:45 "How do you watch if you basically have one hardware" #Deep #EveryHumanIsOneHardware #RobotsWillEnslaveUsAll
This is the greatest solution of life changing information and the best of /.
This is the same thing that powers the subreddit simulator! That sub actually gives nice, plausible posts and stories! (sometimes)
He's working with the phone customisation.
I am currently working on something very similar: reading books and writing small sub-stories. I am happy I am not the only one getting the number loop problem.
What would happen if you tried using this as the generator for a generative adversarial network, with the classifier deciding real vs. generated? Is it too random to get realistic comments?
So how is this different from what a Markov chain does? Because to me this video sounded a lot like the only difference between a Markov chain and a neural network would be how it is made under the hood.
Yeah, this demonstration would work better (or at least be easier) with a Markov chain. But recurrent neural networks allow machines to learn more than just sequences, which Markov chains can't do.
Online discourse will slowly become a Turing test.
How do I know you're not a bot?
Now teach a neural network to detect sarcasm.
I was, but now it has been even more excellent hardware prediction of information that we are to be.
Great video.
Beep Boop.
Every time I see Mike's desk I have to wonder how he works with computers and only has one monitor
Anders Rochester must be so embarrassed for the "thonk" at 8:40
Now I really want to make a neural network that automatically replies to all my emails.
"Isn't being themselves to work"? Woah, 42-level sense answer to the purpose of life right there! :D
Fun Project:
Scan all tweets of a certain president.
Feed into neural network until it learns to predict his next tweet's sentiment/topic/time, maybe even content.
Once it's consistently throwing out tweet predictions that look like the subject could've tweeted them, port it to a game:
*Who said it? AI or Human*
I would be more likely to support the nonsense of a subpar AI than the nonsense of a certain president...
AI for the president. Or is he already?
There are RNNs far more powerful than Donald Trump. I think he only memorizes his last three words.
The comments use words that make sense, but not so much if you actually read them... So actually exactly like YouTube comments. Damn, AI is getting really good.
There's a guy trying to make a neural network learn Super Mario World and (now) Mario Kart (SNES), pretty interesting.
Might that "guy" be SethBling? :D
There are more effective comment generators than this.
I like how one of the comments mentions r/totallynotrobots
I imagine it would be possible to layer this with some sort of pre-processing? For example, you could parse the sentences into word tokens pretty easily first, and then run the same kind of network against words rather than letters?
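That pre-processing step could be as simple as the following hypothetical sketch (real tokenizers handle punctuation and casing far more carefully):

    import re
    from collections import Counter

    # Hypothetical word-level preprocessing: split comments into word tokens and
    # map each distinct token to an integer ID, so the network predicts whole
    # words instead of single characters.
    def tokenize(text):
        return re.findall(r"[a-z']+|[.,!?]", text.lower())

    comments = ["Great video!", "I find your own difference."]
    token_lists = [tokenize(c) for c in comments]
    counts = Counter(t for ts in token_lists for t in ts)
    vocab = {tok: i for i, (tok, _) in enumerate(counts.most_common())}
    encoded = [[vocab[t] for t in ts] for ts in token_lists]
    print(encoded)  # each comment is now a sequence of word IDs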
I propose a new Turing test competition: A bot powered by a neural net that chats over the phone to double glazing salesmen. The winner each year will be the one that keeps them talking the longest. Shouldn't need a very deep neural net.
It bears an eerie resemblance to fluent aphasia.
Now IceMetalPunk will appear even more often. :D Also, just to confuse the network: Comment kompjuterfeil Comment Reply Reply Comment :P
I mean, I already watch every Computerphile video and comment at least once on most of them... how much more often can I appear? XD
Has there been much work on giving such systems initial information to build on? For example, one might give the system a dictionary of English words, acronyms, etc. (possibly letting the system expand it to some extent), with a list of the potential (or probable) types of each word (nouns, verbs, adverbs, adjectives, etc.) and perhaps even verb tenses. Besides going letter-to-letter and word-to-word, it could start building models of overall sentence structures.
This would significantly increase the complexity of the system and prescribing rules, rather than letting the system learn rules, might limit it in certain ways and would require more work initially.
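As a toy illustration of what that "prescribed rules" approach looks like at its simplest, here is a hand-written template generator (the lexicon and template are invented for the example; the point above is that a learned system would have to discover such structure itself):

    import random

    # Toy "prescribed rules" generator: a hand-written sentence template filled
    # from a tiny hand-labelled lexicon of word types.
    lexicon = {
        "adj":  ["recurrent", "random", "deep", "plausible"],
        "noun": ["network", "comment", "video", "computer"],
        "verb": ["generates", "predicts", "watches", "trains"],
    }
    template = ["the", "adj", "noun", "verb", "the", "adj", "noun"]
    # Literal words in the template (like "the") fall through unchanged.
    sentence = " ".join(random.choice(lexicon.get(t, [t])) for t in template)
    print(sentence.capitalize() + ".")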
Now determine which video each comment came from.