In this episode, I go through our latest announcements on Amazon Textract, Amazon Polly, AWS DeepLens, Amazon SageMaker, and the AWS Deep Learning Containers. A couple of small demos are included.
⭐️⭐️⭐️ Don't forget to subscribe to be notified of future episodes ⭐️⭐️⭐️
For more content:
* AWS blog: https://aws.amazon.com/blogs/aws/auth...
* Medium blog: https://medium.com/@julsimon
* YouTube: https://youtube.com/juliensimonfr
* Podcast: http://julsimon.buzzsprout.com
* Twitter: https://twitter.com/@julsimon
speaker 0: 0:00
Hi, this is Julien from AWS. Welcome to episode 16 of my podcast. Please subscribe to my channel to be notified of future videos. I hope you're doing okay in these difficult times; I hope you're safe, with plenty of food and entertainment. As you can see, things are perfectly okay here. I made some new friends. Not much of a conversation with those guys, but they're generally pretty friendly so far. Anyway, this week I'm going to go through some AWS news and announcements. So what do we have this week? Let's start with a new feature in Amazon Textract. Textract is a high-level service that lets you accurately extract text and structure from forms, documents, et cetera, and now it's even more accurate for checkboxes and selection elements. Let's look at an example. This is the kind of document you would want to use with Textract: a health insurance form. It has a lot of boxes, as you can see, and the purpose, obviously, is to detect whether a box has been ticked or not. This is the kind of thing that Textract can do accurately. It's going to find those boxes first of all, and then it's going to figure out whether each box has been selected or not, and this will be reflected in the information that you get back from the JSON API. So this is a cool feature, because this is a super popular use case for Textract. Okay, let's look at the next one. The next one is one of my favorite services, Amazon Polly. Polly is text-to-speech, and a while ago we launched a feature called the newscaster style. Let me explain it in case you missed it. The newscaster style is the ability to apply a news-reading style to the speech that's generated, starting from just text.
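Going back to the Textract feature for a second: the checkboxes show up as `SELECTION_ELEMENT` blocks in the response from the `AnalyzeDocument` API. Here's a minimal sketch of how you could pull them out; the sample blocks below are made up for illustration, and the real API call (commented out) would need AWS credentials and a document of your own.

```python
# Sketch: extracting checkbox status from an Amazon Textract FORMS analysis.
# The block/field names follow the documented AnalyzeDocument response shape;
# the sample data below is fabricated for illustration.

def extract_selection_elements(blocks):
    """Return the selection status of every checkbox Textract found."""
    results = []
    for block in blocks:
        if block.get("BlockType") == "SELECTION_ELEMENT":
            results.append({
                "Id": block["Id"],
                "Selected": block.get("SelectionStatus") == "SELECTED",
                "Confidence": block.get("Confidence"),
            })
    return results

# A real call would look something like (requires AWS credentials):
#   import boto3
#   textract = boto3.client("textract")
#   response = textract.analyze_document(
#       Document={"S3Object": {"Bucket": "my-bucket", "Name": "form.png"}},
#       FeatureTypes=["FORMS"],
#   )
#   blocks = response["Blocks"]

# Made-up sample blocks mimicking the documented response structure:
sample_blocks = [
    {"BlockType": "PAGE", "Id": "p1"},
    {"BlockType": "SELECTION_ELEMENT", "Id": "c1",
     "SelectionStatus": "SELECTED", "Confidence": 99.1},
    {"BlockType": "SELECTION_ELEMENT", "Id": "c2",
     "SelectionStatus": "NOT_SELECTED", "Confidence": 98.7},
]

for box in extract_selection_elements(sample_blocks):
    print(box["Id"], "ticked" if box["Selected"] else "not ticked")
```

In a real form you would also pair each selection element with the `KEY_VALUE_SET` block that labels it, which I'm skipping here to keep the sketch short.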
And this is made possible by a new text-to-speech engine in Polly called the neural engine, where the sound file, the waveform that you listen to, is actually generated by a deep learning network. And since it's generated, we can apply styles as well. So a while ago we launched a newscaster style for English, and now it's available for US Spanish. Let me show you how to use it; it's super simple. You would just go to the Polly console, or use the API, and you need to use SSML syntax. So this is the syntax we need: a speak tag, and then the amazon:domain tag with the value "news", saying we want to apply the newscaster style. We're going to select the neural engine, and we're going to select Spanish. The supported languages right now are British English, US English, Brazilian Portuguese, and US Spanish. So make sure you use the SSML syntax, and you need to close those tags, obviously. Make sure you use the neural engine, and then select the language. Let's try it. [Polly reads a Spanish news article in the newscaster style.] All right, so I don't speak Spanish; this is a news article from El País, a leading Spanish newspaper. And as you can hopefully hear, not only is this speech extremely natural, extremely lifelike, but it's also dynamic.
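The SSML wrapping described above can be sketched like this. The `<speak>` and `<amazon:domain name="news">` tags match what's shown in the demo; the voice choice in the commented-out call ("Lupe", the US Spanish neural voice) is my assumption, so check the Polly console for which voices support the style in your region.

```python
# Sketch: building the SSML required for Polly's newscaster style.

def newscaster_ssml(text):
    """Wrap plain text in the SSML tags required for the newscaster style."""
    return f'<speak><amazon:domain name="news">{text}</amazon:domain></speak>'

ssml = newscaster_ssml("El presidente anunció hoy nuevas medidas económicas.")
print(ssml)

# Real synthesis call (requires AWS credentials):
#   import boto3
#   polly = boto3.client("polly")
#   response = polly.synthesize_speech(
#       Engine="neural",      # the newscaster style requires the neural engine
#       VoiceId="Lupe",       # assumed US Spanish neural voice
#       TextType="ssml",
#       Text=ssml,
#       OutputFormat="mp3",
#   )
#   with open("news.mp3", "wb") as f:
#       f.write(response["AudioStream"].read())
```

Forgetting to close the `amazon:domain` tag, or using the standard engine instead of the neural one, are the two mistakes Julien warns about in the episode.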
And it's really something you would expect to hear on the radio or on TV, because we have this style applied to the text. So there you go, that's the new Polly text-to-speech feature for Spanish. Pretty cool. Okay, let's move on. The next thing I want to talk about is a whole bunch of new tutorials for DeepLens. Remember DeepLens? Here's one. Did you forget about DeepLens? It's a really cool device: a tiny computer vision device with an Intel board and a camera. It's connected to AWS, so that you can train a computer vision model in the cloud, maybe with Amazon SageMaker, and easily deploy it to the camera using AWS Greengrass. And then, of course, you can run predictions on the video stream captured by the camera, and all of that happens locally. So we have local prediction on the camera, and if you want to, you can send information through AWS IoT or another service back to the cloud, to say "these are the predictions that I made, these are the objects I detected," et cetera. DeepLens has been out for a while, and I spent quite a bit of time writing about it and presenting it. And now we have a new website for DeepLens, where we show you how to get started, obviously, and we also added a whole bunch of new projects, going from simple ones (some of the projects that were already available on the camera) to more advanced projects: worker safety, a coffee counter (everybody needs that), and even a trash sorter. Wow, that's pretty nice; I need one of those. So you can just go and experiment with those new projects, and I guess a lot of us have some time to do that at the moment.
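The "report detections back to the cloud" step mentioned above can be sketched as below. The payload format and confidence threshold are my own choices for illustration; on the device itself you would grab frames with the `awscam` module and publish the payload over AWS IoT (for example with the Greengrass SDK), which I'm leaving out here.

```python
# Sketch: formatting local DeepLens predictions for publication to AWS IoT.
import json

def build_detection_payload(detections, threshold=0.5):
    """Keep detections above the confidence threshold and serialize them."""
    kept = [
        {"label": label, "confidence": round(conf, 3)}
        for label, conf in detections
        if conf >= threshold
    ]
    return json.dumps({"detections": kept})

# Example output of a hypothetical object-detection model running on-device:
payload = build_detection_payload([("person", 0.92), ("chair", 0.31)])
print(payload)
```

The point of the design is the one made in the episode: inference runs locally on the camera, and only the small JSON summary, not the video stream, travels back to the cloud.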
So if you have a DeepLens camera and you haven't looked at it and played with it for a while, it's a good opportunity to spend some more time with DeepLens, and learn more about computer vision and about architectures that work well for those kinds of problems. So I think that's pretty cool. Let's move on. Okay, of course, we're going to talk about SageMaker. The news on SageMaker this week is that you can now train using g4dn and c5n instances. There's a long list of instances that SageMaker supports for training and deployment, and we just added these two fairly recent instance families. G4 is a GPU family, as you can imagine, powered by the T4 GPU from NVIDIA. The "d" means it has fast local storage in the shape of NVMe SSDs. So if you have datasets and you copy them locally to the training instances, which I guess is the nominal scenario for SageMaker, then the local I/O on the instance is going to be extremely fast. I benchmarked those NVMe SSDs a while ago, and they are blazingly fast. So if you want to save time, they're a good option. And the "n" means enhanced networking; on the larger g4dn sizes, this means you can go all the way up to 100 gigabits of network bandwidth. So if you do distributed training, or if you're streaming the dataset to the training instance using pipe mode, that's a pretty sweet network bandwidth to work with. So there you go: g4dn means a T4 GPU, local NVMe storage, and up to 100-gigabit networking. Those pack quite a punch. And the c5n instances are one of the latest evolutions of the C5 family, compute-optimized with Intel Xeon chips, and again, the "n" extension here means up to 100-gigabit networking. So again, a good fit.
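Selecting one of these new instance types in SageMaker is just a matter of passing its name to your estimator. A small sketch follows; the parameter names mirror the SageMaker Python SDK v2 (older SDK versions used `train_instance_type`), and the script name, hyperparameters, and bucket are made up for illustration.

```python
# Sketch: pointing a (hypothetical) SageMaker training job at a g4dn instance.

def training_config(instance_type, instance_count=1):
    """Assemble estimator parameters for a hypothetical PyTorch job."""
    return {
        "entry_point": "train.py",        # hypothetical training script
        "instance_type": instance_type,
        "instance_count": instance_count,
        "framework_version": "1.4.0",
    }

cfg = training_config("ml.g4dn.xlarge")   # T4 GPU + local NVMe storage

# With the SageMaker Python SDK (requires AWS credentials and an IAM role):
#   from sagemaker.pytorch import PyTorch
#   estimator = PyTorch(role=my_role, **cfg)
#   estimator.fit({"training": "s3://my-bucket/dataset/"})
```

For distributed or I/O-heavy jobs, the same config with a larger g4dn size (or `ml.c5n.*` for CPU training) is where the 100-gigabit networking mentioned above pays off.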
If you use CPU instances for distributed training, you're just going to save quite a bit of time, thanks to the increased bandwidth. Okay, all right. And what do we have next? Well, of course, we have yet one more update to the Deep Learning Containers. As we know, these containers are AWS-maintained containers for PyTorch, TensorFlow, and Apache MXNet. They come in a CPU configuration or a GPU configuration, and basically, you can use those containers as-is to train or to predict, and you can use them on SageMaker as well. So no need to maintain your own containers; we do that for you, and I strongly encourage you to use them. They're a nice time saver, and they're free to use. That's good: you just pay for the compute instances that you train or predict on, but the containers themselves are free to use. Okay, so here we're updating for PyTorch 1.4 and MXNet 1.6, and I'm sure we'll be back at some point with PyTorch 1.5 and MXNet 1.7. That race never stops. All right, that's it for this episode. Again, please subscribe to my channel for future videos; plenty of really cool stuff coming in the next few weeks. And again, please stay safe wherever you are. As for me, well, you know, I'm ready for anything. So if those guys in the background start creating trouble, I'm kind of ready. All right, enough comedy. Stay safe and I'll see you soon. Until then, keep rocking.
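A quick postscript on the Deep Learning Containers mentioned in this episode: the images live in Amazon ECR, and their URIs follow a documented pattern. The sketch below assembles such a URI; the account ID and simplified tag are my reading of the pattern documented at the time (real tags also encode Python and CUDA versions), so check the official list of available images before pulling one.

```python
# Sketch: assembling an ECR image URI for an AWS Deep Learning Container.

def dlc_image_uri(framework, version, device, region, account="763104351884"):
    """Build a (simplified) Deep Learning Container image URI."""
    repo = f"{framework}-training"
    tag = f"{version}-{device}"  # real tags also encode Python/CUDA versions
    return f"{account}.dkr.ecr.{region}.amazonaws.com/{repo}:{tag}"

uri = dlc_image_uri("pytorch", "1.4.0", "gpu", "us-east-1")
print(uri)

# You could then pull it with Docker after authenticating to ECR, e.g.:
#   aws ecr get-login-password | docker login ... && docker pull <uri>
```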