In this episode, I go through our latest announcements on Amazon Forecast, Amazon Personalize, Amazon SageMaker Ground Truth, AWS Deep Learning AMIs, and Amazon Elastic Inference.
⭐️⭐️⭐️ Don't forget to subscribe to be notified of future episodes ⭐️⭐️⭐️
Additional resources mentioned in the podcast:
This podcast is also available in video at https://youtu.be/H8VjSI6Czy8
For more content:
speaker 0: 0:00
Hi, everybody. This is Julian. Welcome to episode 14 of my podcast. I hope you're all safe. Stay home if you have to, and use that time to watch all my videos and read all my blog posts. And don't forget to subscribe to my channel to be notified of future videos. In this episode, I'm going to go through a bunch of news and cover some of the latest announcements on machine learning. We're going to talk about Amazon Personalize and Elastic Inference and a few more things. Okay, let's get started. First up, Amazon Forecast is now available in three new regions: Sydney, Mumbai, and Frankfurt. Forecast is a high-level service that lets you easily build forecasting models for time series data. If you've never looked at it, I recommend it. It's really, really super easy to use: just upload your data in CSV format to S3, and then, in just a few clicks or just a few API calls, you can build forecasting models for supply chain time series, inventory time series, metrics... anything that's a time series can be easily used with Forecast. It's a nice service, and now customers in those three regions can use it locally. That's always nice. The next news item is actually a combination of services: now you can use Amazon Personalize with Amazon Pinpoint. I don't think I've ever talked about Pinpoint, so maybe I should explain it first. Pinpoint is a service that lets you manage engagement campaigns over a variety of channels: email, SMS, voice notifications, et cetera. So you can push messages to your audience, and you can measure engagement data. Okay, so the service has been available for a while, and now you can actually seamlessly use Personalize with it. What this means is you can reference an Amazon Personalize campaign directly in Pinpoint. So let's say you want to send personalized notifications to your audience with Pinpoint.
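To make the "upload a CSV to S3, then a few API calls" workflow concrete, here is a minimal sketch of the request payload you would pass to the Amazon Forecast `CreateDataset` API (for example through boto3's `forecast` client). The dataset name and the retail demand scenario are made-up illustrations, not from the episode:

```python
import json

# Schema describing each column of the CSV you uploaded to S3:
# one row per (timestamp, item_id, demand) observation.
dataset_schema = {
    "Attributes": [
        {"AttributeName": "timestamp", "AttributeType": "timestamp"},
        {"AttributeName": "item_id", "AttributeType": "string"},
        {"AttributeName": "demand", "AttributeType": "float"},
    ]
}

# Parameters for the CreateDataset call: a daily retail demand time series.
# The dataset name is illustrative.
create_dataset_params = {
    "DatasetName": "retail_demand",
    "Domain": "RETAIL",
    "DatasetType": "TARGET_TIME_SERIES",
    "DataFrequency": "D",  # one data point per day
    "Schema": dataset_schema,
}

print(json.dumps(create_dataset_params, indent=2))
```

From there, you would call `create_dataset_import_job` to pull the CSV in from S3, then `create_predictor` to train, and `create_forecast` to generate predictions.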
Well, you can easily do that now, because you can build a personalization model with Amazon Personalize and plug it into your Amazon Pinpoint campaigns. This is actually based on a Solution. A Solution is an architecture that has been designed and validated by our solutions architects, and you can easily deploy it using a CloudFormation template. Okay, so you can view the deployment guide here and you can launch it directly, and this CloudFormation template is going to create all of those resources inside of your AWS account. So it's going to provision Pinpoint resources, Personalize resources, Kinesis, et cetera, et cetera. Solutions are really a super easy and fast way to deploy an architecture that solves a specific problem. And you don't have to mess with anything: just review the template, run it, and all this will be ready in minutes. So you can read all about it. As a matter of fact, we have quite a few Solutions out there. You can find them on the AWS website, and here I filtered for just the AI/ML Solutions so you can see them. Some of them are 100% based on AWS services; some of them are also based on partner solutions. So you can see fraud detection with machine learning, machine learning for telcos, et cetera, et cetera. I would really, really recommend that you look at those, because you might find that one of them is already really close to a problem that you're trying to solve. And you can just go and test that, and then, of course, you can adapt it to your own needs and customize it if you have to. But at least you won't be starting from a blank page. Okay, so Solutions are nice, and of course I will put the link to all of this in the description. Okay, so that's it for Personalize and Pinpoint and all those nice AI/ML Solutions.
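The "review the template, run it" step can also be scripted. Here is a hedged sketch of the parameters you would pass to CloudFormation's `CreateStack` API (e.g. `cloudformation_client.create_stack(**params)` in boto3); the stack name and template URL are placeholders, and the real URL comes from the Solution's deployment guide:

```python
# Placeholder values for illustration only.
create_stack_params = {
    "StackName": "personalize-pinpoint-solution",
    "TemplateURL": "https://example-bucket.s3.amazonaws.com/solution.template",
    # Solutions typically create IAM roles, so CloudFormation requires an
    # explicit acknowledgement that you allow that.
    "Capabilities": ["CAPABILITY_IAM"],
}

print(create_stack_params["StackName"])
```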
Now, let's talk about SageMaker. There's always something happening on SageMaker, and this week we have new features on SageMaker Ground Truth. So Ground Truth is a capability of SageMaker that makes it easy to annotate datasets at scale. Okay, and I've covered Ground Truth in great detail in previous videos, so please go and check out that demo. The new feature here is that you can now launch multi-label tagging jobs for images and text. Okay? So, previously, if you had to annotate a dataset with different labels, you had to go through multiple rounds to add the different labels. Now you can actually define the list of labels for your images or your text, and you can get all that work done in a single Ground Truth job. Okay, so that's a great simplification for customers. And if you've never seen Ground Truth, it lives in the SageMaker console. So you can create labeling jobs, labeling datasets, and workforces, which can be private, third party, or Mechanical Turk to scale really high. And this is an example of a segmentation job, and I think this is the one I actually used in my demo, where I go through a bunch of images and do semantic segmentation on guitar players. Okay, so if you want to know how to do this, please go and check out those other videos. So now, with this new feature, when you create a new job, you'll see in the task type that you can create multi-label image classification jobs, and multi-label text classification jobs as well. Okay, so there you go: a very nice simplification. I really like Ground Truth. I think it's a really cool service. It solves a really, really hard problem, which is: hey, I've got thousands or tens of thousands of images to annotate. How do I get that done? Well, Ground Truth is how. Okay, and it keeps improving. So that's great. Okay, let's talk about frameworks. So, what do we have here?
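To show what "define the list of labels" looks like in practice, here is a sketch of the label category configuration file that a Ground Truth multi-label classification job reads from S3. The label names are my own examples (riffing on the guitar-player demo), and you should check the Ground Truth documentation for the exact file format expected by your job type:

```python
import json

# Label category configuration: one entry per label that workers can apply.
# In a multi-label job, several of these can be assigned to the same image.
label_category_config = {
    "document-version": "2018-11-28",
    "labels": [
        {"label": "guitar"},
        {"label": "bass"},
        {"label": "drums"},
    ],
}

# You would upload this JSON to S3 and reference its S3 URI
# (LabelCategoryConfigS3Uri) when calling create_labeling_job().
config_json = json.dumps(label_category_config, indent=2)
print(config_json)
```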
Well, I think last week we announced updated Deep Learning Containers with the new frameworks, and of course, not to be outdone, we have the Deep Learning AMIs with the same updated frameworks: the latest TensorFlow 2.1, the latest PyTorch, and the latest MXNet. Okay? So again, if you've never looked at the Deep Learning AMIs and you keep baking your own AMIs, well, please do yourself a favor and check the Deep Learning AMIs. They're free to use. As you can see, we maintain them constantly, we update them constantly, and we just save you the hassle of building those AMIs and installing those deep learning frameworks, which are, let's be honest, not always super easy to install, and the NVIDIA drivers for GPU instances, et cetera, et cetera. Okay, so Deep Learning AMIs and Deep Learning Containers, which I've already discussed, are your friends. Give them a try. I think they're a huge time saver. And we see this last bit here: aha, Elastic Inference with PyTorch. Well, this is actually the next feature I want to talk about. Elastic Inference was launched at re:Invent 2018, and I keep talking about it because I think it's a fantastic service. So let me explain once again, because I know some of you are not familiar with it, why it's so important. Some models are too heavy, too complicated, to be deployed on CPU instances. Sure, you can deploy them, but they're really slow, right? Prediction is going to be slow. So what do you do? Well, of course, you deploy to GPU instances, okay, and everything is super fast and fantastic. But let's be honest: GPU instances are a little more expensive than CPU instances, and sometimes you don't get the most bang for your buck because you use them by default. And if you monitor GPU usage, you realize: well, maybe I'm just keeping that GPU busy 10% or 20% of the time. So things are nice and fast.
But I'm paying for a full-fledged instance, and I'm not keeping it super busy, because my model is not that complicated, not that crazy, or maybe I just don't send enough traffic to it. So it's not a great situation, right? You have to choose between performance and cost optimization. Well, that's exactly the problem that Elastic Inference is solving. Elastic Inference lets you use fractional GPU acceleration. You can pick between three sizes (medium, large, and xlarge), and you get a certain number of teraflops for each size. And you can attach this Elastic Inference accelerator to any EC2 instance. So it could be your own EC2 instance, the one you manage, or it could be a SageMaker managed instance. You have the choice, okay? And now you'll be able to find the right combination of CPU instance and GPU acceleration for your application. You can run your benchmarks and you can find the sweet spot for cost and performance. Okay, so Elastic Inference is a really, really great feature. You can save up to 70 or 80% compared to using full-fledged GPU instances. So please, please, if you're using GPU instances today and you've never, ever looked at Elastic Inference, if you've never benchmarked it, please give it a try. You might just be able to go back to your boss or your CFO and say: well, we just saved 70% on our GPU workloads. Okay, prediction workloads. So that could be a very nice number. So give it a try. This was available at launch for TensorFlow and MXNet. Okay, we actually added extra APIs to those frameworks so that you can use them on your EC2 instances, and of course, this was integrated in SageMaker. And now you can do the same with PyTorch. Okay? PyTorch is super popular. It is a really cool library. So, well, there was no reason for PyTorch users to be left out of the Elastic Inference party. Okay, so now you can use Elastic Inference with PyTorch and save tons of money and buy more beer, right?
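The "save up to 70 or 80%" claim is just arithmetic once you have the hourly prices. Here is a back-of-the-envelope sketch; the prices below are illustrative placeholders, not current AWS pricing, so plug in real numbers from the pricing page for your region and instance types:

```python
# Illustrative hourly prices (placeholders, NOT real AWS pricing).
gpu_instance_per_hour = 1.20   # a full-fledged GPU inference instance
cpu_instance_per_hour = 0.10   # a small CPU instance
accelerator_per_hour = 0.25    # a medium Elastic Inference accelerator

# CPU instance + attached accelerator vs. a full GPU instance.
combined = cpu_instance_per_hour + accelerator_per_hour
savings = 1 - combined / gpu_instance_per_hour
print(f"Savings vs. full GPU instance: {savings:.0%}")  # → 71% with these numbers
```

On SageMaker, attaching the accelerator is a single extra argument: the Python SDK's `deploy()` accepts an `accelerator_type` parameter (e.g. an `ml.eia`-family size) alongside `instance_type`, so benchmarking different combinations is cheap to script.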
So how cool is that? All right, that's it for this episode. I hope you learned a few things. Until next time, please stay safe and keep rocking!