AWS AI & Machine Learning Podcast

Episode 12: AWS news and demos

March 03, 2020 Julien Simon

In this episode, I go through our latest announcements on Amazon Transcribe, Amazon Rekognition, Amazon Forecast and the Deep Learning Containers. I do a couple of demos (redacting personal information in text transcripts, and extracting text from videos). Finally, I share a couple of SageMaker videos that I recently recorded.

⭐️⭐️⭐️ Don't forget to subscribe to be notified of future episodes ⭐️⭐️⭐️

Additional resources mentioned in the podcast:

This podcast is also available in video at https://youtu.be/czngW9Wkjxw

For more content, follow me on:

Transcript


speaker 0:   0:00
Hi everybody, this is Julien from AWS. Welcome to episode 12 of my podcast. Don't forget to subscribe to my channel to be notified of future videos. In this episode, I'm going to go through the latest announcements on services like Rekognition, Transcribe, Forecast and a couple more things. Of course, I will do some demos, and I will share some additional resources at the end. So let's not wait, let's do the news.

Let's start with Amazon Transcribe, our speech-to-text service. You may remember my episode 1 demo on profanity filtering — if you haven't seen that, I recommend it. Here, we added the capability to automatically redact personally identifiable information. The use case for this is, of course, if you have customer calls or customer discussions that contain PII, you may not want those files to be stored as is. You may have to remove PII from the sound files or from the transcripts. So one way or another, you need to locate this information in the file and remove it, and this is exactly what this feature does. I wrote the blog post for this, and I will include the link in the video description. Let's do a quick demo here, so we can see all the information that Transcribe will detect and remove: Social Security numbers, any credit card information, banking information, and of course names, email addresses, et cetera.

So I recorded a short file. Let's listen to this: Good morning everybody. My name is Julien Simon, and today I feel like sharing a whole lot of personal information with you. Let's start with my Social Security number: 123 456 789 03. My credit card number is 652 0559, and my CVV code is 666. My bank account number is triple-8 056 2 98. My email address is julien at amazon dot com, and my phone number is 06 32 95 66 double 8. Well, I think that's it.
You know a lot about me, and I hope that Amazon Transcribe is doing a good job at redacting that personal information away. Let's check. Okay, so obviously it's all fake, don't worry about this — well, I guess my CVV might just be 666, but I need to check that. So that's my sound file. I put it in an S3 bucket, and then I used the StartTranscriptionJob API, which is available in all our SDKs. I wrote that bit of PHP code, which seemed to cause a lot of distress for my colleagues, because it's a well-known fact I have no love for PHP — but hey, you guys are using it, so I should try and use it too. Call this API, wait for a little bit — and of course we can see the job in the console — and then we can grab the output from that command. It's a JSON file, no surprise, and it has all the information that you would normally find in Transcribe output, and of course it has the transcription with PII redacted. Every bit of PII is automatically replaced by a [PII] tag, and you have timestamps. So if you want to do additional audio editing on top of this, to actually remove that information from the sound file itself, you can absolutely do that, using the timestamps and your audio editing software. Well, that's it for Transcribe, and it's available pretty much everywhere. Nice little feature, right?

Let's talk about Rekognition now. Rekognition added yet another capability, which is detecting text in videos. That's pretty useful, because you may want to look for news headlines, or company names, or any kind of information — subtitles, why not — and that's going to come in handy. You can also restrict the area of the video where you want to extract text, because if you're looking for subtitles, for example, they will be located in a very specific, well-defined part of the video.
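(For reference, here's roughly what the Transcribe call from the redaction demo might look like in Python with boto3 — the episode's own demo used PHP, and the bucket, file and job names below are placeholders, not the real ones.)

```python
# Hedged sketch: start an Amazon Transcribe job with PII redaction enabled.
# Bucket, key and job names are placeholders, not taken from the episode.

def build_redaction_job(job_name, media_uri):
    """Build the request parameters for StartTranscriptionJob with PII redaction."""
    return {
        "TranscriptionJobName": job_name,
        "Media": {"MediaFileUri": media_uri},
        "MediaFormat": "mp3",
        "LanguageCode": "en-US",
        # Ask Transcribe to replace detected PII with [PII] tags in the transcript.
        "ContentRedaction": {
            "RedactionType": "PII",
            "RedactionOutput": "redacted",
        },
    }

def start_job(params):
    """Submit the job. Requires AWS credentials; boto3 is imported lazily
    so the sketch loads even without it installed."""
    import boto3
    transcribe = boto3.client("transcribe")
    return transcribe.start_transcription_job(**params)

params = build_redaction_job("pii-redaction-demo", "s3://my-bucket/my-recording.mp3")
```

The resulting transcript JSON carries the redacted text plus per-item timestamps, which is what makes the follow-up audio editing mentioned above possible.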
So you can do that and ignore text that would show up somewhere else in the video. Let's do a quick demo of this. This is the Rekognition console, and I've uploaded a bunch of videos here. Let's have this one — [a short video clip plays] — so it's a short video, and there isn't a lot of text in it, probably just a logo at the beginning. It's already uploaded in S3, so let's run a bit of code and see if we can pick up the text. Let me show you the code. It's super simple — this is Python, because that was enough PHP for a lifetime. We can use the StartTextDetection API to get everything going: just pass the S3 location of your video — the bucket name and the video object name — and that's it. You get a response with a job ID. Then, once the video has been processed — you can just wait for a bit, or you can use an SNS notification, Rekognition supports that and will send you an SNS message — you can call the GetTextDetection API, passing the job ID, and extract the information. So I've done this before, and you would use it like this: StartTextDetection with the location of the video, and then GetTextDetection for the results. And this is what you get. I know this is very fast — it really takes no time at all. It's a short video, but it was just a few seconds, really; I was surprised how fast it was. So we can print out some information. We have timestamps, we have the detected text, and whether it's a line or a word — we give you both. So we can see the detected text — "Marvel Studios" — the timestamp, and the confidence, which is very high.
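The StartTextDetection / GetTextDetection flow described above might look like this in Python with boto3 — a hedged sketch, with placeholder bucket and video names rather than the ones from the demo:

```python
# Sketch of asynchronous text detection in video with Amazon Rekognition.
# Bucket and object names are placeholders.

def build_text_detection_request(bucket, video_key):
    """Request parameters for StartTextDetection on a video stored in S3."""
    return {"Video": {"S3Object": {"Bucket": bucket, "Name": video_key}}}

def detect_text(bucket, video_key):
    """Start a text detection job and poll until it finishes.

    Requires AWS credentials; boto3 is imported lazily so the sketch loads
    without it. An SNS notification channel would avoid the polling loop.
    """
    import time
    import boto3
    rek = boto3.client("rekognition")
    job = rek.start_text_detection(**build_text_detection_request(bucket, video_key))
    while True:
        result = rek.get_text_detection(JobId=job["JobId"])
        if result["JobStatus"] != "IN_PROGRESS":
            return result
        time.sleep(5)

def print_detections(result):
    # Each detection carries a timestamp, the text, a LINE/WORD type and a
    # confidence score. The bounding box lives under TextDetection["Geometry"]
    # and is omitted here because it makes the output noisy.
    for d in result.get("TextDetections", []):
        td = d["TextDetection"]
        print(d["Timestamp"], td["Type"], td["DetectedText"], td["Confidence"])
```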
And then we'll tell you if it's a line of text or a word of text. I commented out the bit here that shows you the bounding box, because the output gets really noisy, but you get the exact location of that line or that word. So if you're looking for specific words that are part of a line, you know exactly where each word is — which is what we see here: the "Marvel Studios" line, and then information for each word. And of course these are present at multiple timestamps, so they will appear multiple times. This is super simple, and I think it's going to come in handy in a lot of use cases for customers. Okay, that's it for Rekognition.

Let's talk about Amazon Forecast. Amazon Forecast is another high-level service that lets you build time series prediction models from your own dataset. This is a very, very complicated problem, but I think Forecast makes it very simple: just upload your time series data to S3, and then either use AutoML to select an algorithm, or go and pick your favorite algorithm and tweak it, if you know what you're doing. Then a model is trained and deployed, and everything happens on fully managed infrastructure. So Forecast is very nice, and it will work from your dataset — your time series data — but you can also inject additional metadata. If you're trying to forecast sales or inventory, you could add metadata on the items themselves, on top of just the stock or sales value you want to predict. Another piece of metadata you can inject is whether that time of the year is a public holiday or not. This is really useful, because obviously it will massively impact the behavior of your model. If it's Christmas Day, or New Year's Day, whatever, then these are really special days — maybe they're high-demand days, or maybe they're low-demand days, depending on your business case. But anyway, telling the model that these days are special holidays, and that its behavior should be different — well, that's useful information. There's actually a parameter for this: when you create the predictor — when you create the model itself — you can pass it a supplementary parameter. Supplementary features, yes. For now, there's only one supported, and it's the country code you want to build a model for; this will factor in the list of holidays for that specific country. We now support up to 30 countries, including France, which has lots of holidays — as you know, we never really work here. And so now you can just add that extra metadata to your models. That's pretty cool.

Still on Forecast, we extended the DeepAR+ algorithm. Let me explain. Like I said, when you train a model on Forecast, you can either use AutoML and let Forecast pick the right algorithm for you, or you can pick the algorithm yourself. One of those algorithms is DeepAR+, an Amazon-invented algorithm that was published — I will add the link in the video description. DeepAR lets you build a model using a large number of time series. So if you want to train a single model on multiple time series, this is a good algorithm to use. The basic idea is that, using deep learning, DeepAR will extract hidden patterns present across your multiple time series. There is some relationship between these time series, and of course the human eye cannot see it, but DeepAR will find those patterns and build a model accordingly. So this is a really important and powerful algorithm. What did we add here? We added hyperparameters. The first one lets you average multiple models over a single training.
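(A quick reference back to the holidays feature from a moment ago: passing the country code as a supplementary feature when creating a predictor might look like this in Python with boto3. The ARNs, names and horizon below are placeholders, and this is a hedged sketch rather than the episode's own code.)

```python
# Sketch: create an Amazon Forecast predictor with the 'holiday' supplementary
# feature, so public holidays for a given country are factored into the model.
# All ARNs and names here are placeholders.

def build_predictor_request(name, algorithm_arn, dataset_group_arn, country="FR"):
    """CreatePredictor parameters with the holiday calendar for `country`."""
    return {
        "PredictorName": name,
        "AlgorithmArn": algorithm_arn,
        "ForecastHorizon": 30,
        "InputDataConfig": {
            "DatasetGroupArn": dataset_group_arn,
            # Factor in the public holidays for the given country code.
            "SupplementaryFeatures": [{"Name": "holiday", "Value": country}],
        },
        "FeaturizationConfig": {"ForecastFrequency": "D"},
    }

def create_predictor(params):
    """Submit the request. Requires AWS credentials; boto3 imported lazily."""
    import boto3
    forecast = boto3.client("forecast")
    return forecast.create_predictor(**params)

params = build_predictor_request(
    "retail-demand-predictor",
    "arn:aws:forecast:::algorithm/Deep_AR_Plus",
    "arn:aws:forecast:us-east-1:123456789012:dataset-group/retail-demo",
)
```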
It's kind of an ensembling technique, I guess, where you train a number of models and then average the predictions from those models. Ensemble prediction is a powerful technique — my theory is that every model makes slightly different mistakes, so if you have multiple models predicting and you average the results, you tend to average out the big mistakes, and the team of models does a better job than any single model would. The second thing is the ability to change the learning rate over time. That's something we're used to doing with deep learning models — scheduling the learning rate over epochs — and now you can do the same here: you can have learning rate decay, so gradually decrease the learning rate over time to train more precisely. And the last one is an obscure one — if you know exactly what this is, you probably don't need me to explain it. There's a new likelihood function. The likelihood function is basically the function that injects uncertainty into the prediction, because time series are noisy and unpredictable, and you need to account for that. Depending on the distribution of your data, certain functions work better than others, and this is one of them. So if you can't sleep tonight, read about piecewise-linear likelihood functions — fascinating stuff. Again, if you know what you're doing, this is going to come in handy.

What else do we have? Oh yes, we have upgraded Deep Learning Containers, one of my favorite topics. By now, you should know we have a nice collection of Deep Learning Containers that package TensorFlow, PyTorch, MXNet and a few more things, and they're off the shelf: you can grab them from Amazon ECR, our Docker registry service. You can run them on your own machine, you can run them on our container services — ECS, EKS — you can run them on EC2, and of course you can run them on Amazon SageMaker. So basically, we keep catching up with the latest versions, and I'm really happy to see that we have TensorFlow 2.1, which is the very latest version — until the next one, but we'll keep catching up.

All right, that's it for the news. Now let's share some resources. I recorded a couple of videos that you might like. The first one is actually a very popular request, and that's: how do you use SageMaker on your local machine? Now, don't get me wrong — SageMaker is really about training and deploying at scale on fully managed infrastructure. But in the early stages of your project, when you're debugging and testing your code, you want to work locally, because you just go faster: you iterate faster, you don't have to create or manage infrastructure, you don't have to pay for it, and you don't have to worry about any setup. So this video will show you how to take existing code and existing notebooks and adapt them to run on your local machine. Well, you'll watch the video if you're interested, but in a nutshell, this means: using an IAM role for your notebook; having local data, although you could absolutely train on S3 data; having Docker on your local machine, because you're going to pull Docker containers to it; and setting the SageMaker estimator to train on your local machine. And I think that's about it. So it's very simple: you can take any notebook, very easily adapt it, and run it on your local machine. The only restriction here is that this will only work for frameworks — TensorFlow, PyTorch, MXNet, scikit-learn, et cetera — and for your own containers. If you're training with built-in algorithms, like DeepAR or all the other ones, it's not going to work, because those containers are not available outside of AWS. But if you're using frameworks or your own containers, this will absolutely work. So again — what took me so long? This is something lots of you have been asking for, for a long, long time. So here it is.

I also gave, just a couple of days ago, an AWS webinar on SageMaker Studio, where I try to show you as much as I can in 47 minutes: going from the IDE, to running notebooks, to hyperparameter optimization, to SageMaker Autopilot for AutoML, to SageMaker Model Debugger and SageMaker Model Monitor. It's a packed session, I got good feedback on it, and it's on YouTube, so now you can watch it too and learn about all the latest features.

Okay, this is it for this episode. I hope you liked it. Don't forget to subscribe to my channel to be notified of future episodes, and I'll see you soon with more content. Until then,
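(Show-notes footnote: the local-mode setup described near the end of the episode might look something like this with the SageMaker Python SDK. The script name, IAM role ARN and parameter names are assumptions on my part — in particular, older versions of the SDK used `train_instance_type` instead of `instance_type` — so treat this as a hedged sketch, not the episode's exact code.)

```python
# Sketch: adapting a SageMaker TensorFlow estimator to train in local mode.
# Assumes Docker is installed locally and the sagemaker SDK is configured.
# The entry point, role ARN and framework version are placeholders.

def local_estimator_kwargs():
    """Estimator configuration for local-mode training (config only, testable)."""
    return {
        "entry_point": "train.py",        # your existing training script
        "role": "arn:aws:iam::123456789012:role/MySageMakerRole",  # an IAM role, since there is no notebook-attached role locally
        "framework_version": "2.1.0",     # matches the TF 2.1 containers mentioned above
        "py_version": "py3",
        "instance_count": 1,
        "instance_type": "local",         # the key change: "local_gpu" on GPU machines
    }

def make_local_estimator():
    """Build the estimator. Requires the sagemaker SDK; imported lazily."""
    from sagemaker.tensorflow import TensorFlow
    return TensorFlow(**local_estimator_kwargs())

# Usage sketch:
# estimator = make_local_estimator()
# estimator.fit("file:///tmp/training-data")  # local data; s3:// URIs also work
```

As noted in the episode, this only works for framework containers and your own containers, since the built-in algorithm containers can't be pulled outside AWS.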