AWS AI & Machine Learning Podcast

Episode 17: AWS news

May 01, 2020 Julien Simon Season 1 Episode 17

In this episode, I go through our latest announcements on Amazon Augmented AI, Amazon SageMaker Studio, Amazon SageMaker, and PyTorch.

⭐️⭐️⭐️ Don't forget to subscribe to be notified of future episodes ⭐️⭐️⭐️

AWS blog posts mentioned in the podcast:
* https://aws.amazon.com/blogs/machine-learning/announcing-availability-of-inf1-instances-in-amazon-sagemaker-for-high-performance-and-cost-effective-machine-learning-inference/
* https://aws.amazon.com/blogs/aws/announcing-torchserve-an-open-source-model-server-for-pytorch/ 

For more content:
* AWS blog: https://aws.amazon.com/blogs/aws/auth...
* Medium blog: https://medium.com/@julsimon 
* YouTube: https://youtube.com/juliensimonfr 
* Podcast: http://julsimon.buzzsprout.com 
* Twitter: https://twitter.com/@julsimon

Transcript

speaker 0:   0:00
Hi everybody, this is Julien from AWS. Welcome to episode 17 of my podcast. I hope you're still doing okay in these strange times, and I hope you're safe wherever you are. Please don't forget to subscribe to my channel to be notified of future videos. This week: lots of exciting AWS news on high-level services, SageMaker, and PyTorch. So let's not wait, and let's get started.

As usual, let's start with the high-level services, and the big news is the general availability of Amazon Augmented AI. Amazon A2I was launched in preview at re:Invent, and now everyone can use it. So what is this service? Well, it lets you build human review workflows for Rekognition, Textract, or a custom workflow. Basically, it's a way to have a human in the loop in order to examine predictions that have a low confidence score. This is what it looks like in the console: you'll find it in the SageMaker console, and it's a little bit similar to SageMaker Ground Truth. First, you need to create a review workforce, and this could be a Mechanical Turk workforce, a private workforce, or a vendor workforce. Then you create a workflow, and as mentioned, the workflow can be Textract, Rekognition, or a custom workflow. The way this works is: you push data to, let's say, Textract, and if the confidence score for the Textract prediction is below a certain threshold that you define, then the sample is sent for human review to your workforce (see the sketch at the end of this section). So you get the best of both worlds: you can automate predictions with a high-level service or with a custom workflow, and you can make sure that low-confidence predictions are reviewed by humans. That's useful because no machine learning model will ever get to 100% accuracy. So this is a really cool service, and you should definitely try it out.

What else do we have? Oh yeah, one of my favorite services: Transcribe Medical now supports custom vocabulary. As you know by now, I'm sure, Transcribe Medical is a speech-to-text service specialized for medical vocabulary. This works exactly the same as custom vocabulary for Transcribe: basically, you create a text file with your words, you upload it with the create-vocabulary call, and that's how Transcribe Medical will know how to transcribe your custom words. So if you have specific vocabulary that needs to be transcribed exactly right (maybe drug names, maybe something like that), then you can use custom vocabulary. That's a good way to, again, improve the accuracy of your transcriptions. So that's pretty nice.

Moving on to Translate: regional expansion. Batch translation is now available in Europe (London), so good news, I guess, for UK and Ireland customers. What else do we have? Lex is available in additional regions too: London, Frankfurt, and Asia Pacific. Lex is the chatbot service, so now you can use it in additional regions. Always good to know: it cuts down on latency, and if you have data in those regions, of course, it's always easier to work with the local version of the service.
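Here's the sketch mentioned in the A2I section: a minimal example of the Textract integration with boto3, assuming a flow definition has already been created in the console. The flow definition ARN, bucket, and document name below are hypothetical, and the confidence threshold itself lives in the flow definition's activation conditions.

```python
import uuid
import boto3

textract = boto3.client("textract")

# Analyze a document; the A2I activation conditions (e.g. "confidence
# below threshold") are attached to the flow definition, not to this call.
response = textract.analyze_document(
    Document={"S3Object": {"Bucket": "my-bucket", "Name": "scans/form.png"}},
    FeatureTypes=["FORMS"],
    HumanLoopConfig={
        "HumanLoopName": f"review-{uuid.uuid4()}",  # must be unique
        "FlowDefinitionArn": "arn:aws:sagemaker:us-east-1:123456789012:"
                             "flow-definition/my-review-flow",
    },
)

# If the activation conditions matched, a human loop was started and the
# sample is on its way to your review workforce.
print(response.get("HumanLoopActivationOutput", "no human review triggered"))
```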
And I guess the really big news is the general availability of SageMaker notebooks, which is the notebook element in SageMaker Studio. SageMaker Studio is also now available in additional regions, which answers probably the number one question I was getting these days: when do we get SageMaker Studio outside of us-east-2? Well, here you go. You can now use Studio, of course, still in Ohio (us-east-2), and also in us-east-1 (Virginia), us-west-2 (Oregon), and Europe (eu-west-1). Here's a SageMaker Studio instance that I created in eu-west-1, and, well, it looks the same, but there are a few extra things. For example, if I open this notebook here, the collaboration feature is now available. It's one of the things we discussed at re:Invent: the ability to take notebook snapshots and share them. This is now actually available. We also have different compute environments: during the preview, you could only use the smallest compute environment, and now you can actually use different ones, CPU and GPU. If I look at the full list, we can see a long list of available compute environments, so that's pretty good: you can find the exact environment that works for you. And I'm quite sure there are a few more bells and whistles that I haven't caught yet. Anyway, that's big news: notebooks are now GA (and more stable, I should say) and available in three additional regions. So if you've never tried SageMaker Studio, the time is right. I think it's about time to do it, and of course you'll find tons of videos on my YouTube channel showing how to do that.

Okay, let's keep going. This is another SageMaker announcement, and I think it's pretty important: you can now use Inferentia on SageMaker, with Inf1 instances. Let me explain. Inferentia is a custom chip built by AWS to provide high-throughput, low-cost inference for customers who really need to scale their prediction infrastructure beyond what's possible with GPU instances. The Inferentia chip is available in Inf1 instances, which are EC2 instances, and we've built a specific SDK that lets you compile your deep learning models for Inf1 instances, deploy them, and so on. But so far, you had to do this on EC2, and you had to pretty much build the compilation and deployment pipeline yourself. Well, now it's available in SageMaker, and actually, I wrote the blog post for it, so I'll include the link in the description. It is a super, super simple integration, as you can see here. The only thing you have to do is compile the model that you trained, and you compile it with SageMaker Neo, a pre-existing capability designed to compile models for specific hardware architectures; Inf1 is now one of those target architectures. So you just use the Neo API to compile the model, and then you just call deploy, with an Inf1 instance type, of course (see the sketch just below). This is really as simple as it gets, and it's one of the reasons why I like SageMaker so much: doing this on EC2 is a much more involved process, and here it's literally two lines of code to deploy a hardware-optimized model on a really, really fast custom chip. So this is a really cool feature, and you can check out the blog post.
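Here's the sketch: a minimal example with the SageMaker Python SDK, assuming `estimator` is a PyTorch estimator whose training job has already completed. The input shape, S3 path, and framework version below are illustrative.

```python
from sagemaker.pytorch import PyTorch

# `estimator` is assumed to be an already-trained PyTorch estimator, e.g.
#   estimator = PyTorch(entry_point="train.py", ...)
#   estimator.fit(...)

# Step 1: compile the trained model for Inf1 with SageMaker Neo.
compiled_model = estimator.compile_model(
    target_instance_family="ml_inf1",          # Neo compilation target
    input_shape={"input0": [1, 3, 224, 224]},  # illustrative input shape
    output_path="s3://my-bucket/neo-output/",  # illustrative S3 location
    framework="pytorch",
    framework_version="1.4",                   # illustrative version
)

# Step 2: deploy the compiled model to an Inferentia-powered endpoint.
predictor = compiled_model.deploy(
    initial_instance_count=1,
    instance_type="ml.inf1.xlarge",
)
```

Those are effectively the two calls mentioned above: one to compile, one to deploy.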
When it comes to frameworks, we also have a pretty big announcement: we worked with Facebook on a model server for PyTorch, called TorchServe, which is really the equivalent of, I would say, TensorFlow Serving for TensorFlow. This was a pretty big gap for PyTorch users. PyTorch is really great for experimentation, it is really flexible, but when it came to deploying models, it was missing a production-grade model server to serve predictions at scale. Well, this gap is now filled by TorchServe. Again, I wrote the blog post for this, and you can go and read all about it. It's really, really easy to install, and of course it is open source, so you can go and grab it from the repository. It supports single-model and multi-model loading, it supports HTTPS, it has monitoring capabilities, and so on: all the production features that you would expect from a model server (see the client sketch after the transcript). So again, really, really good news, because I think this was a strong ask from the PyTorch community, and I'm really happy AWS worked on this and contributed the code to the PyTorch project. So there you go, you can read all about it.

All right, that's it for this week. I hope you learned a few things. Again, don't forget to subscribe to my channel to be notified of future videos, and I'll see you soon with more news. Until then, avoid the zombie apocalypse and keep rocking!
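Here's the client sketch referenced in the TorchServe section: a minimal example, assuming TorchServe is running locally and serving a model registered under the hypothetical name `densenet161`. By default, the inference API listens on port 8080.

```python
import requests

# TorchServe serves predictions at POST /predictions/<model_name>
# on port 8080 by default.
with open("kitten.jpg", "rb") as f:  # hypothetical input image
    response = requests.post(
        "http://localhost:8080/predictions/densenet161",
        data=f,
    )

print(response.json())  # e.g. predicted classes and probabilities
```

Packaging a model into an archive is done separately with the torch-model-archiver tool, and the management API (port 8081 by default) lets you register and scale models at runtime.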