All right, so hopefully everyone can see my screen. Nice to meet you all today. My name is Jason Mayes. I am a developer advocate for TensorFlow.js here at Google, and today I'm going to be talking to you about machine learning in the browser and beyond, in JavaScript essentially. Before we get started, I just want to give a quick overview of my background. I'm actually from a web engineering background, and you might be wondering why a web engineer is here at a machine learning session. Essentially, over the years I ended up combining JavaScript with machine learning, and that is why I'm here to talk to you about that today. If there's one thing to take away from this slide, it's that I've spent 16 years playing with JavaScript, and for that reason I've got a few things to say about why it's actually interesting to consider for machine learning use cases.
And on that subject, the first thing to talk about is that JavaScript can run in pretty much any location: in the browser, on the server side, on desktop, on mobile, and even on the Internet of Things. If we dive into each one of those areas, you can see many of the technologies you might be familiar with already: the popular web browsers on the left-hand side; Node.js on the server side; React Native for native mobile applications; Electron for desktop applications; and even Raspberry Pi via Node.js over there for the Internet of Things. JavaScript is the only language that can run in all of these environments without additional add-ons and plug-ins and all that kind of stuff, so for that reason alone it's very powerful. And with TensorFlow.js you can run models, you can retrain them via transfer learning, or you can write your own machine learning models completely from scratch if you so desire, just like you can do currently in Python.
And of course, that means you can dream up anything you may wish, from augmented reality to sound recognition, sentiment analysis and much, much more, some of which we'll see today. Now, there are a few ways you can use TensorFlow.js, depending on your background and experience with both machine learning and JavaScript.
The first one I want to talk about is the pre-trained models. These are really easy-to-use JavaScript classes, so even if you have very limited knowledge of machine learning, you can use them very quickly and effectively. Many of these are available to use out of the box as open source, such as object detection, body segmentation, pose estimation, and other things you may be familiar with here, all converted to run in the browser, which is pretty cool. So let's dive into some of these models and see how we can actually code them up to use in the web browser and beyond. The first thing we're going to look at is object recognition, which is actually using COCO-SSD behind the scenes and has been pre-trained on common object classes. You can see I've made a little demo here on the right that allows me to get the bounding box information back and render it to the browser to highlight the objects in the scene, and we display the class and the confidence there as shown. So let's walk through the code for how I created this, so we can see how it's actually put together.
First off, the HTML. Nothing too fancy here, just bread-and-butter HTML. First we import some stylesheets. Second, we have a section for our demos, and initially this is going to be invisible because we want to wait for the machine learning model to load; when the model is ready, we make the demos visible so you can then click on them and interact with them. Within this section we have a bunch of images that you can click on, and when you click on one, you get the results for the objects in that image. Each image is contained within a div element so that we can add other things to that area later on, as you'll see. Then all we need to do is import some JavaScript libraries. The first one is TensorFlow.js itself. The second one is the COCO-SSD model that we want to use, which is one of the premade models I just spoke about. And the third is our own script.js, which will use both of these. So that's all we need for the HTML. Then, diving into the JavaScript, the script.js for this page, the first thing we do is grab a reference to the demo section in the HTML, simply using getElementById, bread-and-butter DOM manipulation, and then we have a global variable, model, which is just going to be undefined for the time being until the model has loaded.
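To make the HTML side concrete before we get into script.js, here's a minimal sketch of the page being described; the class names, image file name, and CDN URLs are illustrative rather than copied from the actual demo:

```html
<!-- Demo section starts hidden until the model has loaded. -->
<section id="demos" class="invisible">
  <div class="classifyOnClick">
    <img src="dog.jpg" alt="An example image to run object detection on" />
  </div>
</section>

<!-- Import TensorFlow.js, the COCO-SSD model, and then our own script. -->
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs"></script>
<script src="https://cdn.jsdelivr.net/npm/@tensorflow-models/coco-ssd"></script>
<script src="script.js"></script>
```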
So the next thing we need to do, of course, is load the model. Because we imported COCO-SSD with our script tag on the previous slide, the cocoSsd variable is now available to us, and we simply call cocoSsd.load. Because this is an asynchronous operation, we use the then keyword: when it's ready it will call a function of our choosing and pass us the loaded model. We can now assign this loaded model to our global variable, model, so we know the model is loaded and can use it in our other code, and we can then remove the invisible class in the CSS so that the demos render correctly. You'll see in the demo later on that it goes from a greyed-out state into a nice, colourful state, so you know when to click on things. Next, we grab all the images that we want to be clickable. I gave them a class of classifyOnClick in the HTML, so we grab all the elements that have that class, giving us an array of image containers.
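As a rough sketch of what that looks like in script.js, assuming the element IDs and class names from the HTML sketch above:

```js
// Grab the demo section and keep a global reference to the model once loaded.
const demosSection = document.getElementById('demos');
let model = undefined;

// cocoSsd is made available globally by the script tag imported in the HTML.
cocoSsd.load().then(function (loadedModel) {
  model = loadedModel;
  // Reveal the demos now that the model is ready to be used.
  demosSection.classList.remove('invisible');
});

// Grab every container that should become clickable.
const imageContainers = document.getElementsByClassName('classifyOnClick');
```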
We then iterate over all of these image containers, and for each one we get the child image node and add an event listener for the click event, associating a function called handleClick to run when that element is clicked. If you go into the handleClick function, you can see that when it executes, an event is passed in. If the model has not yet loaded — because it does take a couple of seconds — then we return straight away, just in case someone tried to click on something before we're ready. Otherwise, we go ahead and call the model's detect function. This takes an image-like object as a parameter; in this case we pass the event target, which is the image that was clicked. Again, this is an asynchronous operation, so we wait for it to finish and then it passes the predictions to a function of our choosing — in this case I've named it handlePredictions. So let's dive into the handlePredictions function. Essentially, you can see a JSON object is passed back, which you can inspect if you want to see what it looks like, but it's just an array of objects for the things we think we found in the image, along with their confidence values and so on and so forth.
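Something along these lines, continuing the sketch above (handlePredictions is defined in the next step):

```js
// Attach a click listener to the child image of each container.
for (let i = 0; i < imageContainers.length; i++) {
  imageContainers[i].children[0].addEventListener('click', handleClick);
}

function handleClick(event) {
  // Bail out if someone clicks before the model has finished loading.
  if (!model) {
    return;
  }
  // detect() accepts any image-like element and resolves with an array of predictions.
  model.detect(event.target).then(function (predictions) {
    handlePredictions(predictions, event.target.parentNode);
  });
}
```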
We can then iterate over those objects with a for loop and start creating some new HTML elements so we can render the details to the screen. All I'm doing here is creating a paragraph tag and setting its text to contain the class and the rounded score for the confidence, then adding some style to this paragraph so it appears over the image. I then add a div element to act as the bounding box — it's just a box with a dashed border so we can see the boundary nicely — and I position it where the model recognised the object, setting its width, height, top, and left. With that we've got our bounding box. All I need to do now is add these two elements I've created in memory to the actual DOM, the web page itself, and we're basically good to go. Finally, add some CSS; how you style things is up to you, of course, but here I've got some nice transitions and borders and so on to make it look pretty.
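A sketch of that rendering step might look like this, assuming a hypothetical CSS class called `highlighter` that draws the dashed border and a relatively positioned container; the exact styling is up to you:

```js
// Render each prediction as a text label plus a dashed bounding box over the image.
function handlePredictions(predictions, container) {
  for (let n = 0; n < predictions.length; n++) {
    // bbox is [x, y, width, height] in pixels relative to the image.
    const [x, y, width, height] = predictions[n].bbox;

    const label = document.createElement('p');
    label.innerText = predictions[n].class + ' - with '
        + Math.round(predictions[n].score * 100) + '% confidence.';
    label.style.cssText = 'left: ' + x + 'px; top: ' + y + 'px;';

    const highlighter = document.createElement('div');
    highlighter.setAttribute('class', 'highlighter');
    highlighter.style.cssText = 'left: ' + x + 'px; top: ' + y + 'px; '
        + 'width: ' + width + 'px; height: ' + height + 'px;';

    // Add both elements to the DOM inside the clicked image's container.
    container.appendChild(label);
    container.appendChild(highlighter);
  }
}
```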
And if we go over to the demo, you can see what this is going to look like. So I'm going to stop sharing this screen and go over to my other window — one moment while I share my screen again.
And hopefully I can now find my way. Good stuff.
So now, if we switch to my new window — I think I can still see myself.
Perfect. You can see here this is basically running live in my browser right now, and if I click on any of these images, we get the instant gratification of the objects it can detect in the scene — even here, where we've got multiple objects, it comes back nicely with objects of different types and so on and so forth. So you can see how you could use this very easily to detect, say, if your dog at home is trying to steal a treat or something like that; it would not be too hard to take something like this and turn it into a smart system of some kind. Now, what's even better is that if I enable my actual webcam right now, you can see me in my bedroom talking to you, and we can do this with webcam imagery as well. If we just call the detect function many times per second, we get a live update of everything, as you see here — and that is how fast this actually is, running in JavaScript, in the web browser, at a high frames per second. JavaScript is very, very good at the display of information and computer graphics and all this kind of stuff — that's basically what it was designed for — so it's very easy to make these visual feedback systems, as you see here.
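For reference, continuously classifying the webcam is roughly a matter of piping a video element into the same detect() call on every animation frame. Here's a rough sketch, assuming a hypothetical `<video id="webcam" autoplay>` element and the model loaded as before:

```js
const video = document.getElementById('webcam');

function predictWebcam() {
  model.detect(video).then(function (predictions) {
    // Render the predictions just like in the click handler, then schedule the next frame.
    window.requestAnimationFrame(predictWebcam);
  });
}

// Ask for webcam access and start the prediction loop once frames are flowing.
navigator.mediaDevices.getUserMedia({ video: true }).then(function (stream) {
  video.srcObject = stream;
  video.addEventListener('loadeddata', predictWebcam);
});
```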
So I'm going to stop presenting this and go back to the slides — one moment while I share my screen again. If you can present my slides again for me, please.
So next up we have Face Mesh, which is just three megabytes in size and essentially allows you to recognise 468 facial landmarks. It's just another premade model like the one you saw before, but this one is aimed at identifying parts of the face. And it can be used for many interesting things, including augmented reality, such as the image on the right-hand side. This is actually from ModiFace, which is part of the L'Oréal group, and the lady on the right is actually not wearing any lipstick: they're using Face Mesh to understand where her lips are and then using WebGL shaders to overlay graphics on her face, so it looks like the lipstick is actually on her face and looks very realistic. They can change the shade of the lipstick and so on and so forth, which is really useful in today's world where we're stuck at home, can't touch the product, and maybe don't want to try products other people have touched. In that case this is very useful and can help you still shop in these times. I've got a demo of this as well to show you running live, because it is really cool. So if I just go ahead and swap my screen again — there's going to be a bit of screen swapping going on today. One moment please.
Can someone present my screen? Perfect. So here you can see my face is being detected in real time — that's the machine learning going on on the left. But because this is JavaScript, there are very rich libraries for 3D graphics, as I mentioned before, so on the right-hand side you can see I'm rendering a 3D point cloud of my face in real time, at the same time as the machine learning is running, all in the browser. You can see I'm getting around twenty-five frames per second, even though I'm streaming right now, which is using up my processor quite a bit. And this is running on the CPU — you can see at the top here it's running in WebAssembly, WASM, which means it's executing on the CPU of the device in the web browser. I can actually switch this to WebGL, and then I can get even faster performance by leveraging the graphics card of the device. If I do that when I'm not streaming, I can get around forty-five to fifty frames per second for this particular model, which is pretty cool. So these things run really fast, they're super lightweight, and they allow you to do whatever you might want to dream up. I encourage people to get creative with these models and figure out how they might apply them to their use cases. Now I'm going to stop this and go back to the slides again.
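For reference, switching between these backends from code is only a couple of lines — a small sketch (the WASM backend additionally needs the @tensorflow/tfjs-backend-wasm package to be loaded alongside TensorFlow.js):

```js
async function chooseBackend() {
  // Run on the graphics card via WebGL...
  await tf.setBackend('webgl');
  // ...or on the CPU via WebAssembly instead:
  // await tf.setBackend('wasm');
  await tf.ready();
  console.log('Active backend:', tf.getBackend());
}
chooseBackend();
```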
One moment while I share my screen. Back to the slides here.
Next up we have body segmentation. This allows you to distinguish 24 body areas across multiple bodies in real time; it's called our BodyPix model. You can see here on the right-hand side how this works in action, with multiple bodies being detected at the same time, and all the different colours represent different parts of the body. Even better, we can get an estimate of the pose as well — that's the blue lines you see in the image on the right-hand side. Now, with a bit of creativity we can emulate many of the superpowers you see in the movies, and I'd like to share a few examples I've created to illustrate this. The first one is invisibility. This is running live in the web browser, and I made it in just one day: I'm able to remove myself in real time from the webcam feed in the browser. But notice, as I get on the bed, how the bed still deforms — so it's not some cheap trick where I'm just replacing the background with a static image. This background is being updated in real time: I'm removing the body pixels and calculating what the background is over time. So this can make for some cool effects. But what about lasers? Once again we can turn to WebGL.
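To give a feel for the API behind these demos, here's a minimal sketch of calling BodyPix, assuming the @tensorflow-models/body-pix script is loaded and there's a hypothetical `<img id="people">` element on the page:

```js
const img = document.getElementById('people');

bodyPix.load().then(async function (net) {
  // For every pixel, work out which of the 24 body parts (and which person) it belongs to;
  // the result also includes the estimated poses shown as the blue lines in the demo.
  const segmentation = await net.segmentMultiPersonParts(img);
  console.log(segmentation);
});
```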
We have people from our community making things like this: he used WebGL shaders combined with Face Mesh to shoot lasers from his mouth and eyes, just like you can in Iron Man or something like that. And I thought, well, this is pretty cool, but let's go one step further. Some of you may have seen recently on social media that I created this kind of teleportation demo that allows me to segment myself in real time. I then transmit myself using WebRTC — web real-time communication — over the Internet to some remote location, and then using WebXR, which is web mixed reality, I can place myself in a remote room. The person watching me in that remote room can walk up to me, hear me from the right angle, move around me, and so on and so forth. So now, instead of having a video conference where you're stuck in a rectangular box, you can be physically present, almost, in 3D, which makes for a much more immersive and personal feeling of meeting someone — especially at times like these when we're stuck at home and it's very hard to be out in large groups. And of course other creations can be made beyond this as well. Here's another one I created, for clothing size estimation.
I don't know about you, but I'm terrible at knowing what clothing size I am out in the wild — I always forget my sizes, and of course the body changes over time as well. So here, in about fifteen seconds, I can use BodyPix to figure out what my size is, getting my measurements: using my height, I can estimate my chest, inner leg, and waist measurements, which allows the web page to automatically select whether I'm a small, medium, or large, that kind of thing. So now I can buy clothes without having to return them all the time, and save time and money because of that. And we've seen the community do some great things too. This guy from Paris, France, from our community, has managed to combine it with WebXR and WebGL, so he can scan any magazine and bring the person from the magazine into his living room. So maybe you're interested in fashion or something like this, and you can now go and inspect that clothing in more detail, in a way that is more meaningful to you. And he's using his mobile phone here — actually a two-and-a-half-year-old Android device — to do this. So it doesn't require the latest hardware, and it's all running in the web browser on that device, which is pretty, pretty cool.
Now, the second way you might want to use JavaScript-based machine learning is transfer learning. Once you outgrow our premade models — maybe they don't quite work with the data that you have available to you — you might want to retrain those models to work with data that's more meaningful to you. And of course, if you are a machine learning expert, which I'm sure many of you are given the audience today, you can do the same stuff you do in Python and write it all in code if you wish. However, today I want to focus on two examples that allow you to do it much faster than that, for simpler use cases. The first one is Teachable Machine, which is great for prototyping because it allows you to do everything in the web browser — at no point, other than delivering the web page, is a server involved. We do the training in the web browser and we do the inference in the web browser. The best way to explain it is to go and use it, so let's switch my screen and show you how this works. Once again, I'm going to stop my presentation here and flip over to Teachable Machine.
There we go.
So when you go to teachablemachine.withgoogle.com, you're presented with a page something like this. Teachable Machine allows you to train it to recognise images, audio, or specific poses, and these three are just the starting point — I'm sure more will be coming soon, as it says here. But for today we're going to go with images. You click on Image, and by default you have two classes, but you can add more classes if you like at the bottom left here. Now, I'm going to go ahead and rename them to something more meaningful. The first thing I'm going to recognise is myself, and the second thing I'm going to recognise is a deck of playing cards I've got in my room right now. So all we need to do is click on Webcam, allow access to our webcam, and we get a live preview of what's coming from the camera here.
We can use this to take samples of the objects we're interested in. So I'm just going to sit here and move my head around and get a few samples — one moment — and grab maybe a few more images of my face in various positions. Of course, if you were doing this properly you'd use more variety and more training data, but for the purpose of today's demo that's all we need. Now I do exactly the same thing, but holding up this deck of cards instead, and I'll get roughly the same number of images — about thirty-eight... forty-one, that's close enough. Then I simply click Train Model. What happens here is that Teachable Machine retrains the top layers of a model — it's actually a classifier built upon MobileNet — so we can repurpose the base understanding that model has already learned to classify these new objects. And you can see, in under fifteen seconds it's already trained; we're done. On the right-hand side you can see a live preview of it in action: it predicts the output is Jason, which is correct, as shown in the live preview. And if I hold up my deck of cards, you can see it detects the cards straight away — Jason, cards, Jason, cards. Look how fast that was: in under three minutes we've managed to make something you could use as a prototype to demonstrate an idea, and you're good to go. If this is good enough for your needs, you can simply click on Export Model here, download the various files containing the model weights and such, host those on your website or a CDN, and then use the model on any website you like with just a few lines of code — really not too hard to do.
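Using that export on your own page is only a few lines. Here's a hedged sketch, assuming you host the exported model files at the URL below and load the @teachablemachine/image helper library alongside TensorFlow.js:

```js
// Where you uploaded the exported model.json and metadata.json (illustrative URL).
const MODEL_URL = 'https://example.com/my-model/';

async function classify(imageOrVideoElement) {
  const model = await tmImage.load(MODEL_URL + 'model.json', MODEL_URL + 'metadata.json');
  // Each prediction contains a className (e.g. 'Jason' or 'Cards' here) and a probability.
  const predictions = await model.predict(imageOrVideoElement);
  console.log(predictions);
}
```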
And of course, if you're trying to make more robust models, you'll need more training data to avoid any biases and so on and so forth. For that, as already mentioned, Google Cloud AutoML is better suited to those kinds of situations.
So I'll stop presenting this and go back to the slides — sharing my screen again.
So the next thing is Cloud AutoML, and of course the previous presenter has already gone into this in much detail, so I'm just going to give a very high-level overview here. Essentially, as already mentioned, Cloud AutoML allows you to train custom models in the cloud; however, it can also export to TensorFlow.js, which is pretty nice, and that's why I'd like to touch on this very quickly. You can see here how someone is trying to recognise different flowers: they upload all their photos of flowers to Cloud Storage, point Cloud AutoML at that data, as you saw before, set your various options — whether you want higher accuracy for your predictions, the various parameters if you want to — and then you continue and wait for it to train. As you just saw, you then have an option to download at the end: you download that zip bundle and unzip it onto your server, or store it somewhere else even, and basically you can then use those files on your own website. You might be wondering, well, how hard is it to use a model trained on Cloud AutoML? It's so easy it fits on one slide — even easier than what I showed you before — so let me just quickly walk you through it.
All we need to do is import two libraries at the top in the HTML: the TensorFlow.js library and the Cloud AutoML library. The third line here is just the image I want to classify — an image of a daisy I grabbed from the Internet. This could be anything: it could be the webcam, it could be something else if you wanted, but for today's demonstration I'm just going to use this image. Then we get to the JavaScript — just three lines of code. All we need to do is wait for the model to load, so we call loadImageClassification and pass it the model.json file that you downloaded from the previous step and hosted on your website somewhere; you just tell it where that is and wait for it to load. We use await here because this is asynchronous and takes some time. When it's ready, it's assigned to this model constant. Next, we grab a reference to our daisy image, which is just above — simply getElementById with the ID of daisy you see up here — and that's stored as a constant. Then all we need to do is call model.classify with the image we want to classify and await the results. Again, this is an asynchronous operation, so it might take a couple of milliseconds, and then you get the predictions back as a JSON object, which you can iterate through just like before and print out or do something useful with.
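Put together, the whole page looks roughly like this — a sketch with illustrative URLs and IDs rather than the exact slide contents:

```html
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs"></script>
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs-automl"></script>

<img id="daisy" crossorigin="anonymous" src="https://example.com/daisy.jpg" />

<script>
  async function run() {
    // Point at the model.json downloaded from Cloud AutoML and hosted on your site.
    const model = await tf.automl.loadImageClassification('model.json');
    const image = document.getElementById('daisy');
    // Returns an array of {label, prob} objects you can iterate over.
    const predictions = await model.classify(image);
    console.log(predictions);
  }
  run();
</script>
```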
So it's super easy to use, but it can also be used for production-level-quality applications, because you can upload gigabytes of data to Cloud AutoML and the resulting models are very robust for the purposes of image classification and much, much more. The third way to use TensorFlow.js, of course, is to write your own code completely from scratch. For those of you who are used to doing that, I'm not going to go into detail right now because that's beyond a forty-minute talk; instead I'm going to concentrate on the superpowers and performance benefits you can get by considering JavaScript over, say, Python. To explain that better, we need to talk about how things have actually been architected. Essentially, we have two APIs. The first is a high-level API called the Layers API; this is basically similar to Keras, if you're familiar with that from Python — in fact, if you've used Keras in the past, you'll be very familiar with the Layers API, because we have the same function definitions and so on and so forth. The only difference is that you're using JavaScript instead of Python, but the function calls are pretty much the same. Down below we have the Ops API, which is the more mathematical API that lets you do the linear algebra and that kind of fun stuff. So if you do want to get down and dirty with the code at the lower level, to make your own models or do things from scratch, you can do that with our Ops API as well, just like in the original TensorFlow.
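As a quick illustration of the two levels — a minimal sketch rather than anything from the slides — the Layers API reads almost exactly like Keras, while the Ops API exposes the raw tensor math:

```js
// Layers API: define and compile a tiny model, Keras-style.
const model = tf.sequential();
model.add(tf.layers.dense({ inputShape: [4], units: 8, activation: 'relu' }));
model.add(tf.layers.dense({ units: 1 }));
model.compile({ optimizer: 'sgd', loss: 'meanSquaredError' });

// Ops API: straight linear algebra on tensors.
const a = tf.tensor2d([[1, 2], [3, 4]]);
const b = tf.tensor2d([[5, 6], [7, 8]]);
a.matMul(b).print();
```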
Bringing this all together, you can see how our premade models at the top sit on top of the Layers API, and the Layers API sits on top of the Core, or Ops, API, whichever you want to call it. This can then talk to different environments, such as the client side — and by the client side I'm referring to things like the web browser, WeChat, React Native, those kinds of environments. Each of those environments understands how to talk to different backends, and by backends I simply mean hardware: the CPU, i.e. the processor; WebGL, which allows you to run on the graphics card from JavaScript in the web browser; and WASM, or WebAssembly, which is how we get better performance on the CPU. So depending on what's available on the device you're running on, you can choose which one of these to execute on. The CPU backend will always be there but is the slowest form of execution, so you probably want to be using WebGL or WASM for real-world usage, and most devices support those — I think WASM has over ninety-five percent support across devices and WebGL is around eighty-six percent or so. So there's very widespread support, and WebGL can run on an NVIDIA graphics card or an AMD graphics card.
So that's already pretty cool — you can run on many more devices. Now, the flip side of this story is the Node.js side, the server-side stuff, just like you'd have with Python on the server. Node.js is our equivalent for that in JavaScript, and it talks to the same TensorFlow CPU and TensorFlow GPU backends, which are written in C++. That means it has the same performance as Python for model inference, because we can leverage the same CUDA acceleration and so on behind the scenes — both Python and JavaScript are basically talking to the same backend written in C++. So with that, if you're doing all your ML development in Python, there are ways to have an easier journey to the web via Node: you can take a Keras model that you've saved and load it via the Layers API, or, at the lower level, take the TensorFlow SavedModel and load that in Node without any conversion and use it in a server-side environment for free. We'll see the advantage of doing that in just a second. Now, the third thing you might want to do is convert a saved model from Python and use it in the web browser.
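A rough sketch of both routes in Node.js, assuming @tensorflow/tfjs-node is installed and the file paths are placeholders:

```js
const tf = require('@tensorflow/tfjs-node');

async function run() {
  // Route 1: load a Keras/Layers model you saved, via the Layers API.
  const layersModel = await tf.loadLayersModel('file://./my-model/model.json');
  layersModel.summary();

  // Route 2: load a TensorFlow SavedModel exported from Python, with no conversion step.
  const savedModel = await tf.node.loadSavedModel('./my_saved_model');

  // Dummy input shape purely for illustration.
  const input = tf.zeros([1, 224, 224, 3]);
  const output = savedModel.predict(input);
  console.log(output);
}
run();
```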
Maybe you want to reach more people or get more people to try out your work — as a researcher, that could be useful to you. With our converter, you can basically do that, though there are some caveats: we don't have all the ops support that core TensorFlow has right now. Obviously we're playing catch-up — we're a team that's only two or three years old against something that has been around much longer — but we are open source and we welcome contributions to those ops, if you want to add more on that side as well. A lot of models do convert without any issues, but if you're using some more exotic operators you may find that's an issue; the converter will mention which operation is not supported, and you can choose to implement it, or use a different op to get around it. Cool. So let's talk about performance, then. This is for MobileNetV2, and you can see that running on the graphics card in Python it takes 7.98 seconds, and running on the graphics card in Node.js it's 8.81, so it's within the margin of error, pretty much — depending which way the wind was blowing on the server the day we recorded this result — it's basically the same. But the real beauty comes when you're doing a lot of pre- and post-processing.
Now, often in machine learning, as you're probably aware, you have to manipulate the data coming from sensors and so on to get it into a format that the machine learning model can digest, and this can be very non-trivial at times. In JavaScript, if you do the pre- and post-processing in Node, you can benefit from the just-in-time compiler that JavaScript has, because we compile at runtime, and that can actually lead to a two-times performance boost, as you can see here for Hugging Face. That's their DistilBERT model that they converted to run in Node: while the inference time was the same, the whole pipeline was two times faster, because the pre- and post-processing had been converted to Node.js as well. So, yeah, go play with some things — maybe keep doing your models in Python as you're used to, but try Node for the pre- and post-processing. Then not only do you get the reach side of the web, but you also get some of these performance boosts for free, so it could be a nice marriage of technologies. Now let's talk about some of the superpowers. If we just focus on the client side for a second — if we deploy TensorFlow.js in the web browser — then we get the following very useful benefits that are hard or impossible to achieve in a server environment. The first one is privacy, from executing entirely in the web browser.
That means any data coming from the sensors stays on the client's machine — at no point is the imagery or sound or whatever data you're grabbing sent to a third-party server. In today's world, privacy is top of mind, and that's really important, especially if you're making some kind of healthcare application, as we saw before, or if you have legal requirements, like the GDPR rules in Europe and that kind of thing — there are many reasons you might want to execute things on the client side. Executing on the client side also leads to point number two: lower latency, because there's no round-trip time to the server and back again. Especially on mobile devices, the latency involved can be quite high — you could be looking at a hundred milliseconds or more just to talk to the server and get a result — so you can get higher frames per second by classifying this stuff in real time directly at the source, and the web browser has direct access to all of the device's sensors. Third point: lower cost. If you're running a server, you need to keep that server running all the time to do the inference, which means hiring CPUs and GPUs and lots of RAM running 24/7, and if you have tens of thousands of users per month, running client-side can lead to some serious cost savings, maybe tens of thousands of dollars per month if you have a popular site. Fourth point: interactivity.
As I mentioned before, JavaScript has from day one been all about the sharing of information, graphics, and all this kind of stuff, and it's only got better since then, so we have very mature libraries for 3D graphics, charting, and data visualisation, and I encourage you to check some of those libraries out, as they're much easier to use than other things I've tried in other languages. Fifth point: reach and scale. Anyone in the world can click on a web link and get access to the web page, and that means they get the machine learning model running for free — it just works out of the box. The same is not true if you try to do this on the server side in TensorFlow Python or something, because first of all you need to install Linux, then you have to install the CUDA drivers, which is often quite tricky to do, then you have to clone the GitHub repository, then you have to read the README of the repository to understand how to use the code — and if all of that works out well for you, then maybe, just maybe, you'll be able to get it running. So as you can see, it's a much higher barrier to entry going down the other path, at least as an end user. So if you're a researcher, this could get you many more eyes on your cutting-edge research: if you provide your model so it can be executed in the browser, you could have tens of thousands of people using it, giving you feedback, and finding any bugs or biases, which can then be fixed faster,
rather than just a few people from your team being the only ones using it. And finally, on reach and scale: as I mentioned, on the GPU we can support around eighty-four percent of devices via WebGL, which means we can run on a MacBook Pro with an AMD graphics card, whereas I believe TensorFlow on the server side can only support NVIDIA GPUs, to the best of my knowledge, via the CUDA drivers. We don't care what graphics card you have — as long as it can run WebGL, we're good to go. Now, the flip side of this is that on the server side there are also some benefits, and these apply both to Node.js and Python, whichever one you're using. First, you can use the TensorFlow SavedModel format without conversion, which is great if you're trying to integrate with a web team — as I mentioned before, a lot of web developers out there are probably more familiar with JavaScript, so if you're working in Python on the machine learning side, this is a really nice way to collaborate with other teams. Second point: you can run larger models. On the client side you'll often be limited by the graphics memory of the client's device as to what size model you can load and execute on that device.
Now, in most cases that's not too much of a problem, because many of the models are just megabytes in size, but if you were trying to push a larger model down the pipeline, then it's probably better to run it on the server side, and in that case you can do that. Third point: it allows you to code in one language. If you're already using JavaScript — and currently sixty-seven percent of developers out there use JavaScript as their production language, according to a 2020 survey — then that means you need to hire fewer people and you can reuse your code and so on and so forth, so it just makes maintainability a little bit easier. Fourth point: Node has a very large NPM ecosystem, so just like on the client side, there are many libraries you can use for doing many different things. And the fifth point, as already mentioned, is performance: you get the just-in-time compiler boost that you don't get in some other languages, which means you can get up to a two-times performance boost for your machine learning pipelines from the pre- and post-processing. So with that, I'm going to end with some resources. If there's one slide you want to bookmark or screengrab today and share with your friends, it's this one here. First of all, our TensorFlow.js website, which covers the basics and gives a good introduction.
Second point: our premade models are all available there too. I've only spoken about two or three models today, but there are actually many, many more that we've open-sourced and that you can use out of the box, and I encourage you to go play with them after the show. We are completely open source, just like the original TensorFlow, so if you're willing to contribute, check out our GitHub code; there are README files there as well with more information on the various models and so on and so forth. If you have more technical questions, we have a public Google group that you can join, and our team tries to monitor it as much as possible. We've also got CodePen and Glitch examples that are super easy to fork and use — these are examples showing how to use some of our more popular premade models with the webcam, with images, and with other data, so you can just fork one, have a working starting point, and then change it to do what you need very, very quickly. If you're looking for recommended reading, I'd recommend Deep Learning with JavaScript, which you can see here, from Manning Publications, and if you use our discount code you can actually get thirty percent off that book — or any book, I think — so that's a great code to use. And then, if you are interested, please come join the community.
We have a hashtag, #MadeWithTFJS, on Twitter and LinkedIn, where you can see even more projects that people around the world are making — not just myself and my team, but people around the world, just like you, who are starting out in JavaScript or new to machine learning, or both, and are still able to make some really great things. Do check it out for inspiration. I want to show you one more: this guy from Tokyo, Japan, has actually managed to make a hip-hop video using TensorFlow.js. The key thing here is that he's actually just a dancer, but he's managed to leverage TensorFlow.js to do this very nicely. My main point in showing this is that machine learning really is for everyone now — it's not just limited to academics and researchers. We can now start to see creatives, musicians, artists, and many more starting to take their first steps with machine learning too, and I'm really excited to see how TensorFlow.js can be part of that picture, given how widely JavaScript is used for many different things. I'd love to see what you make as well in the future, and do share with us if you make anything. And of course, I'm available on Twitter for any further questions if you don't have any right now. So thank you very much for listening, and I'll hand it back to the presenters.
Awesome. Thank you so much — that was a wonderful presentation, with lots of amazing demos. I really loved the conclusion, which is that we are democratizing machine learning: just as JavaScript came in and revolutionized the whole industry, we are seeing the same thing happen again.
Yeah, it's like a second wave right now, and people who are jumping into the job market right now are at a very exciting time to have a lot of influence as well. And I hope to see more collaboration in this space between the Python communities and the JavaScript ones. As I mentioned, we can run the same model format in Node, and it would be lovely to see more of the academic folk exposing their work to the web environment so people can start using it in ways they never even dreamed of. I mean, people are doing teleportation and all sorts of things right now with BodyPix — things like this can be made, and with a bit of creativity new things can come to life. So I'm really excited to see where that can go in the future.
Yeah, that's amazing. One of the questions that we got in the chat was: why would you choose TensorFlow.js over the alternatives? I think you've covered some of those things, but feel free to go into more detail.
Yeah, I don't want to force any language biases on people — use whatever you're comfortable with, of course. But the benefits I spoke about today, such as the ones in the browser where you get privacy, lower latency, and cost savings — those are the main reasons you might want to consider deploying to the web if those things are important to you and your use case, and that might be a good reason. Or maybe you're looking to get a little performance boost for the pre- and post-processing, in which case Node might be a consideration. But honestly, you can do this stuff in any of these languages, so use what you're familiar with. Although if you are going to integrate with other teams, maybe consider exporting the saved model and allowing it to be used by people who might want to work in JavaScript or something like that, yeah, of course.
Yeah, that is a great way to do it — you can not only build your own models, you can also use existing models that are built by other team members. Awesome. Thank you so much. Over to our next presenter. Thank you.