Great presentation, guys! From my reading, it seems like one advantage of LiteRT over MediaPipe is finer-grained control over which models are used, including the ability to quantize them. One use case I'm thinking about for adding on-device AI capability to our Android app is supporting various hardware configurations, with more or less memory for example. Do you think it would be possible to switch at runtime to the model that would perform best on that particular device's hardware? Would that require a ton of work to implement, in your opinion?
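For what it's worth, the selection logic itself can be fairly small. Here's a minimal Kotlin sketch of the idea, assuming two bundled model variants (the asset names and the ~4 GB threshold are made up for illustration) and the classic `org.tensorflow.lite.Interpreter` API, which LiteRT keeps for compatibility:

```kotlin
import android.app.ActivityManager
import android.content.Context
import org.tensorflow.lite.Interpreter
import java.io.FileInputStream
import java.nio.MappedByteBuffer
import java.nio.channels.FileChannel

// Hypothetical asset names -- you'd bundle whichever variants you export.
private const val MODEL_SMALL = "classifier_int8.tflite"   // quantized, for low-memory devices
private const val MODEL_LARGE = "classifier_fp16.tflite"   // higher-fidelity default

fun createInterpreter(context: Context): Interpreter {
    val am = context.getSystemService(Context.ACTIVITY_SERVICE) as ActivityManager
    val memInfo = ActivityManager.MemoryInfo()
    am.getMemoryInfo(memInfo)

    // Arbitrary example threshold: fall back to the quantized model under ~4 GB of RAM,
    // or whenever the OS reports the device as low-RAM.
    val lowMemory = am.isLowRamDevice || memInfo.totalMem < 4L * 1024 * 1024 * 1024
    val asset = if (lowMemory) MODEL_SMALL else MODEL_LARGE

    return Interpreter(mapAsset(context, asset))
}

// Memory-map a model bundled in assets; the .tflite file must be stored uncompressed.
private fun mapAsset(context: Context, name: String): MappedByteBuffer =
    context.assets.openFd(name).use { fd ->
        FileInputStream(fd.fileDescriptor).channel.use { channel ->
            channel.map(FileChannel.MapMode.READ_ONLY, fd.startOffset, fd.declaredLength)
        }
    }
```

The runtime switch is the easy part; the real work tends to be exporting and validating the quantized variants and benchmarking which one actually performs best per device tier. Also note the .tflite assets need to be excluded from APK compression (e.g. `noCompress "tflite"` in Gradle) for the memory-mapping above to work.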
Nice conversation 👍