AWS AI & Machine Learning Podcast
Season 1, Episode 5
January 13, 2020
Julien Simon
Chapters
0:30 News
8:40 Extra resources
11:13 Demo

In this episode, I talk about a couple of recent features, share some extra resources, and point you to a pretty cool computer vision demo.

⭐️⭐️⭐️ Don't forget to subscribe to be notified of future episodes ⭐️⭐️⭐️

News
* Amazon Comprehend - multi-label classification: https://aws.amazon.com/about-aws/whats-new/2020/01/amazon-comprehend-launches-multi-label-custom-classification/
* Amazon Translate - batch translation: https://aws.amazon.com/about-aws/whats-new/2020/01/amazon-translate-introduces-batch-translation/

Extra resources
* O’Reilly AI 2018 talk (SageMaker built-in algos): https://youtu.be/dGn1HAHbLWM
* O’Reilly AI 2019 talk (Introduction to NLP models): https://youtu.be/kwHGlTi27LY
* Sagify demo: https://youtu.be/cWv8zR2Qu94

Detectron2 demo
https://youtu.be/7R8-VAk0ruk

This podcast is also available in video at https://youtu.be/RE3gD1ti_0s

For more content, follow me on:
* Medium: https://medium.com/@julsimon
* Twitter: https://twitter.com/julsimon

Transcript

Julien Simon:   0:00
Hi, this is Julien from AWS. Welcome to episode 5 of my podcast. Don't forget to subscribe to be notified of future episodes. In this episode, I'm going to cover some new features that came out this year, I'll share some resources as well, and I'll point you to a pretty cool demo that I recorded.

So let's start with the news. It's the beginning of the year and things are still a little bit slow, but some cool features came out recently. The first one I want to talk about is a new feature in Amazon Comprehend, our natural language processing service. Comprehend added multi-label classification for custom classifiers. That's a mouthful, so let me explain. Initially, you could do things like sentiment analysis and entity extraction with Comprehend. After that, the team added the ability to create your own custom classifiers: you could upload your own dataset and train your own text classifier on your own labels, without worrying about the internals or the infrastructure. But that was for single labels. Say you wanted to classify movies: a given movie could only be in a single category, action or drama or comedy, etc. Now you can have multiple labels per document, and you can train classifiers that will be able to assign those multiple labels to new samples. In the movie example again, a movie could be action and comedy, or it could be drama and history, something like that. And if you have, say, legal documents, you could classify them with a high-level category like contract or cease-and-desist or term sheet, and you could have another category giving you extra details. So this multi-label feature gives you a lot of flexibility to train and build models that classify text data in many different ways. Let me show you quickly how this works in the console. This is the Amazon Comprehend console. Let's just open that menu, and we can see Custom classification here on the left. If you click on the 'Train classifier' button, you see that you can now train on multi-label documents. And here's that movie example again: one text could be a comedy movie, another could be a drama movie, and another could be a comedy and drama movie. So that's how you do it: just go to the console and select multi-label mode. Of course, this is also available in the APIs. Okay, simple enough.
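Speaking of the APIs, here's a minimal boto3 sketch of what training a custom classifier in multi-label mode could look like. The classifier name, S3 location, and IAM role below are placeholders, not values from the episode:

import boto3

comprehend = boto3.client("comprehend")

# Train a custom classifier in multi-label mode. Each line of the
# training CSV holds the labels (joined by the delimiter) followed by
# the document text, e.g. "ACTION|COMEDY,<document text>".
response = comprehend.create_document_classifier(
    DocumentClassifierName="movie-genres",  # placeholder name
    LanguageCode="en",
    Mode="MULTI_LABEL",  # the new mode; the default is MULTI_CLASS
    DataAccessRoleArn="arn:aws:iam::123456789012:role/ComprehendRole",
    InputDataConfig={
        "S3Uri": "s3://my-bucket/movies/train.csv",  # placeholder bucket
        "LabelDelimiter": "|",
    },
)
print(response["DocumentClassifierArn"])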
The next feature I want to talk about is an Amazon Translate feature. Translate is our translation service, you guessed right, and Translate can now do batch translation. Previously, you could do real-time translation, translating one piece of text at a time using the API. Now you can translate a whole batch of documents stored in an S3 bucket, and these documents can be plain text or HTML documents. That's particularly useful if you have a bunch of web pages you want to translate, for example: you can just put them all in S3, create a batch translation job, and get all your files translated in one go into the languages you like. One restriction, though: at the moment you cannot do automatic language detection with batch processing, so you have to know which language the input documents are in. But that's not too bad, I guess. Okay, let me show you quickly where this sits in the console. This is the Amazon Translate console, and if we open the menu here, we see a new entry for batch translation. Clicking on this and then on 'Create job', we see the parameters for a job: a name, the source language, the target language, and of course the location of the input data, the input file format, which can be text or HTML, and the location of the output data. And that's about it. Super simple. Using this, you can now translate a ton of documents in one go. That's pretty cool.

The next feature I want to talk about is actually a bit of a discovery that I made while reading API documentation. This wasn't formally announced, and I guess the team thought I wouldn't notice it. But guys, come on, you know me. I'm talking about a new feature in SageMaker Autopilot. As you probably know, SageMaker Autopilot is a new capability in Amazon SageMaker that lets you automatically build a model, and it handles the algorithm selection automatically. If you're interested in more details, I have a bunch of YouTube videos on my channel showing you the Autopilot workflow end to end. Anyway, the feature I'm referring to is the metric that you ask Autopilot to optimize on. When the service launched at re:Invent, you could only optimize for accuracy. Well, as I found out reading the documentation, you can now optimize for more metrics. Let me show you. This is the documentation for the CreateAutoMLJob API, which is the one we use in SageMaker Autopilot to get things going. If we look at the parameters here, we see a parameter called MetricName. As I mentioned before, accuracy used to be the only metric you could optimize on, but now you can also use mean squared error, and you can use F1 and F1 macro, which are two variations on the F1 score. That's an improvement because, of course, different problems require different metrics, and now you can pick the metric that fits your problem best.
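To make that concrete, here's a minimal boto3 sketch of a CreateAutoMLJob call that asks Autopilot to optimize for F1 instead of the default accuracy. The job name, S3 paths, target column, and role are placeholders:

import boto3

sm = boto3.client("sagemaker")

sm.create_auto_ml_job(
    AutoMLJobName="demo-autopilot-job",  # placeholder name
    InputDataConfig=[{
        "DataSource": {"S3DataSource": {
            "S3DataType": "S3Prefix",
            "S3Uri": "s3://my-bucket/training/train.csv",  # placeholder
        }},
        "TargetAttributeName": "label",  # placeholder target column
    }],
    OutputDataConfig={"S3OutputPath": "s3://my-bucket/output/"},
    # The new bit: pick the metric Autopilot optimizes for
    # (Accuracy, MSE, F1 or F1macro).
    AutoMLJobObjective={"MetricName": "F1"},
    RoleArn="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder
)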
The last bit of news I want to mention is not AWS news, but it's important to a lot of you: TensorFlow 2.1 came out in the last few days, with, of course, plenty of bug fixes and feature additions, etc. But there are really two things I want to mention. First, this is the last TensorFlow version that will support Python 2.7. So all of you out there who have procrastinated and delayed migrating your scripts to Python 3, well, bad news: it's now time to do it. Otherwise you'll be stuck with TensorFlow 2.1 until the end of time, which maybe isn't a problem, but it's always better if you can leverage the new frameworks. So, time to update those scripts. The second thing I want to mention is that when you install TensorFlow 2.1 and up, as in 'pip install tensorflow', you're now installing the GPU-enabled version by default. That's what you want if you're installing it on a GPU-powered machine. If you just want to install it for CPU, then you have to explicitly 'pip install tensorflow-cpu'. The GPU version will work fine on a CPU machine, of course, but it's a little bulkier, and if you want to use it inside a container and keep that container as small as possible, don't forget to specifically install the CPU version. And of course, check the release notes for all the other tiny and not-so-tiny additions. That's it for the news.

In the last week, I also added a bunch of resources that I think you could be interested in. First, I uploaded some conference videos from the O'Reilly AI conference in London, which was kind enough to invite me in 2018 and 2019. The 2018 video is a deep dive on the built-in algorithms in Amazon SageMaker, showing you some benchmarks, and all of it is still very relevant. So if you're interested in why those built-in algos rock, you can take a look at that video. I also uploaded the 2019 talk, which is an introduction to natural language processing models. If you've read a little bit about that topic, you've probably heard about those funny models called ELMo and BERT and XLNet and so on. Well, this talk is a, hopefully, gentle introduction to what those models are and to building NLP models in general. Some people on YouTube actually enjoyed this video; some commented that it was the best intro to NLP models they'd seen. So, well, I'm humbled, and I guess some of you will enjoy it as well. Look for that one. I also uploaded a video showing you how to deploy models on SageMaker. That's a whiteboard video, and I know some of you are big fans of those, so help yourself. I go through three scenarios: the first one is deploying a single model, the second one is deploying different model variants on the same endpoint, and the last scenario is multi-model endpoints, where you load and unload models dynamically. That's a recent feature, and a great, great way to optimize cost for customers who have hundreds and maybe thousands of models. So look for that one. And last but not least, I uploaded a demo of an open-source tool called Sagify, written by my friend Pavlos. So thanks again for this, Pavlos. Sagify is a CLI, a command-line tool, to train and deploy models on SageMaker. So if you don't even want to look at the SageMaker SDK and you just need simple CLI commands, well, take a look at Sagify. Okay, that's it for the extra resources.

Now, I'm sure you're thinking: well, where is the demo? I'd like to show you how to run the Detectron2 model on the Deep Learning AMI. Detectron2 is a very, very fancy computer vision model designed by Facebook, released at the end of November. But I guess, with re:Invent and the holidays, it might have gone unnoticed by some of you. The demo is a little too long to be recorded here, and of course it's quite visual, so it wouldn't work well with the audio version of the podcast. So I'm going to reference the video in the podcast description and you can look at it. I'm basically running the tutorial on the Deep Learning AMI, which requires a few modifications to the notebook, and I run some sample images and also a sample video through Detectron2. I think the result is quite spectacular, so you don't want to miss this. Okay, so there is a demo, but it's in another video (and if you want to try Detectron2 yourself, there's a short code sketch right after this transcript).

Well, that's the end of episode 5. I'll be back next week with a guest. I'm actually recording a few guest sessions this week, and I'm sure that's going to be really fun. So stay with me, and don't forget to subscribe to the channel to be notified of future episodes. Until then, keep rocking.
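As promised, here's a minimal Detectron2 inference sketch, close to the official tutorial: it pulls a pre-trained Mask R-CNN from the model zoo and runs it on a local image. The image path is a placeholder, and the notebook modifications from the demo are in the video, not here:

import cv2
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

# Load a pre-trained instance segmentation model from the model zoo.
cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5  # confidence threshold

predictor = DefaultPredictor(cfg)
image = cv2.imread("input.jpg")  # placeholder image path
outputs = predictor(image)
print(outputs["instances"].pred_classes)  # detected COCO class ids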
