How to Build a Multilingual AI Voice Assistant in FlutterFlow (OpenAI Text-To-Speech App Tutorial)
- Published Jun 30, 2024
- In this video we walk through how to build a multilingual AI voice assistant app in under 35 minutes with FlutterFlow.
Cloneable Project Link
app.flutterflow.io/project/sp...
Open API/Swagger File
github.com/openai/openai-open...
OpenAI Documentation
platform.openai.com/docs/intr...
Timestamps:
0:00 - Introduction
0:13 - Demo
0:52 - Creating FlutterFlow Project
2:37 - Setting Up Theme Settings
3:32 - Designing App Page
5:34 - Recording Buttons
8:44 - Waveform
11:14 - Widget Animations
12:55 - Reviewing Cloneable Project
12:09 - Voice-To-Text Custom Actions
16:27 - Chat Completion API Call
19:40 - Stop Recording Actions
20:18 - Adding Multiple Languages
22:34 - Text-To-Voice Custom Action
27:35 - Adding Remaining Actions
28:47 - Timer Widget
31:37 - Final Demo of Working App
Ready to try FlutterFlow for yourself? Start building your app today with a free trial 👉 www.flutterflow.com
Follow us on Twitter 👉 / flutterflow
--------------
FlutterFlow is a low-code builder for native apps, bringing design and development into one tool. With drag-and-drop functionality, you can build pixel-perfect UIs and easily connect your app to live data via Firebase or APIs. Plus, you can add advanced features like push notifications, payments, animations, and more. Whether you build your own custom widgets or write custom code, FlutterFlow makes it easy to bring your app ideas to life.
Amazing. I was waiting for this Flutterflow tutorial since Whisper came out 😄
Thank you!!
great tutorial, love the pacing of this! please do more
Well done FF, I was waiting for this video for a long time. Now I will go for the paid version. Good work, FlutterFlow team!
Can't I run it as test for myself for free?
@@armankarambakhsh4456 what is the problem that you had?
Amazing work thank you !!
This is fire, thank you. People could create generational wealth with this skillset
Yes! Indeed
Finally!!
Hey @FlutterFlow, in the action flow editor there are "start and stop audio recording" actions; why did you use a custom action instead of these built-in actions?
Nice one! When will the image-to-text one be done?
Cool project! Will this work with the new GPT-4 preview API? I assume the chat completions code needs changing?? Thx
Great tutorial 👍
Is it safe to add the API key as app state if I want to publish to the web?
Can you do a video about the Bluetooth (BLE) connection using an ESP32? I think the tutorial has a mistake, because I followed it step by step, but it never worked 😢
I would Like to see a video using Assistants and OCR
Tell me you made that audio up, haha... hilarious.
This is brilliant! I'm a complete noob at all this, so I have a bunch of questions if anyone can help, please?
1) I assume I'd be able to deploy this app (or one very much like it) to mobile if I were on the Standard plan via APK?
2) This pubspec dependency seems to eliminate the need for using Whisper or Google's S2T API - is there a list of other great dependencies?
3) Why does the Standard plan only have the same number of API endpoints as the free plan?! I see that you can make API calls with custom actions but I can't quite figure them out, does anyone have any good guidance on this?
min 13:15 everyone gangsta till you NEED CUSTOM CODE
@flutterflow how do we do this for other AI LLMs, like Hugging Face, Llama, etc.? It would be cool to see other LLMs used in the app space with FF integration.
Just one quick question: how can I add a function to stop the text-to-speech?
Hi, could you help explain why SpeechToText throws an error "msg: not-allowed" in FlutterFlow Test mode but runs normally in Run mode?
I get the following error: Refused to load media from 'data:audio/mp3;base64,' because it violates the following Content Security Policy directive: "media-src *". Note that '*' matches only URLs with network schemes ('http', 'https', 'ws', 'wss'), or URLs whose scheme matches `self`'s scheme. The scheme 'data:' must be added explicitly. Is there any fix for that?
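A possible fix, going by the error message itself (this is a sketch, not verified against this project): FlutterFlow web builds ship a web/index.html, and if it declares a Content-Security-Policy meta tag, the data: scheme has to be listed under media-src explicitly for base64 audio to play:

```html
<!-- web/index.html (sketch): allow data: URLs for audio playback.
     Merge this into your existing policy rather than replacing it. -->
<meta http-equiv="Content-Security-Policy"
      content="media-src * data:;">
```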
I can not deploy my app with your action functions to app store, I'm getting this message: "the app status of version has changed to invalid binary". What should I do?
How can we connect this to a google agenda so it can make appointments and send a notification to a client
N8n or make automations
Just a heads up that this no longer runs due to issues with speech_to_text
Thanks, I was wondering why it wouldn't work.
How do you fix it?
I get a 401 error (failure) when I tested it. Can you help me, pls?
Informative video, but does the app only work on the web platform?
In this tutorial, yes. The custom action used to play the audio back utilizes web-specific features. This could be easily adjusted, however, by using a package on pub.dev!
@@FlutterFlow Excellent Tutorial as always! Appreciate all the great content. Can you elaborate on your response about how we can easily adjust the audio playback so the custom function will work on iOS and Android deployments? I see the "//Play the audio audioElement.play();" in the function lines but am not sure what to replace them with and which changes to make. Like what can we use from pub.dev and how would we best insert that into the custom function. I am new to Dart and Flutterflow but these tutorials are a lifesaver. Thanks!
Following this thread, as I also would like to get this working via iOS and Android.
Me too
@@armandoortiz3613hey Armando, did you ever get a fix to this problem?
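Following up on FlutterFlow's reply above about swapping in a pub.dev package: a minimal sketch of a cross-platform playback action using the just_audio and path_provider packages instead of dart:html. The function name and flow are assumptions, not the project's actual code; it decodes the base64 MP3 the speech endpoint returns, writes it to a temp file, and plays it natively.

```dart
// Sketch: play a base64-encoded MP3 on iOS/Android without dart:html.
// Assumes just_audio and path_provider are added to pubspec.yaml.
import 'dart:convert';
import 'dart:io';
import 'package:just_audio/just_audio.dart';
import 'package:path_provider/path_provider.dart';

Future<void> playBase64Mp3(String base64Audio) async {
  // Decode the base64 payload into raw MP3 bytes.
  final bytes = base64Decode(base64Audio);

  // Write the bytes to a temp file so the native player can read them.
  final dir = await getTemporaryDirectory();
  final file = File('${dir.path}/tts_response.mp3');
  await file.writeAsBytes(bytes);

  // Play the file with just_audio (works on iOS and Android).
  final player = AudioPlayer();
  await player.setFilePath(file.path);
  await player.play();
  await player.dispose();
}
```

You'd paste something like this into a FlutterFlow custom action in place of the web-only fetchSpeechAndPlay logic.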
Is there a reason you used a custom package to transcribe the audio instead of the OpenAI transcription service? Could you update this to use that instead? thx
The custom package is free with minimal latency (doesn't require an API call), hence the use in this video. The whisper API could be used simply by adding an API call to the project and passing in a saved recording of the user's audio message.
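The Whisper-based alternative described in the reply above could look roughly like this (a sketch using the http package; the endpoint and fields follow OpenAI's audio API docs, but the function name and wiring are assumptions):

```dart
// Sketch: transcribe a saved recording with the OpenAI Whisper API
// instead of the on-device speech_to_text package.
import 'dart:convert';
import 'package:http/http.dart' as http;

Future<String> transcribeWithWhisper(String apiKey, String audioFilePath) async {
  // Multipart upload of the audio file to the transcription endpoint.
  final request = http.MultipartRequest(
    'POST',
    Uri.parse('https://api.openai.com/v1/audio/transcriptions'),
  )
    ..headers['Authorization'] = 'Bearer $apiKey'
    ..fields['model'] = 'whisper-1'
    ..files.add(await http.MultipartFile.fromPath('file', audioFilePath));

  final response = await http.Response.fromStream(await request.send());
  // On success the endpoint returns JSON like {"text": "..."}.
  return jsonDecode(response.body)['text'] as String;
}
```

The trade-off, as noted above, is an extra API call (cost and latency) in exchange for server-side transcription quality.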
Can someone provide a link or explain how to make it for the app version, not just the web version?
Can you do an updated version since this one doesn't work anymore?
Hey @ff, I have been trying to clone the app but with no luck. What is the problem, please?
?
Please, how do I save, play, and download this audio? How do I add maleVoice and femaleVoice options?
@FlutterFlow it doesn't respond to my question, it just says something random. How can I fix this?
I get the same behavior. It's as if the transcribed audio is not being passed to the API via the [prompt] parameter.
When running in test mode in the browser, I see the errors below:
Any help is appreciated. ✌😎
dart_sdk.js:50705 registerExtension() from dart:developer is only supported in build/run/test environments where the developer event method hooks have been set by package:dwds v11.1.0 or higher.
dart_sdk.js:29145 Starting text recording
dart_sdk.js:29145 Error!: SpeechRecognitionError msg: not-allowed, permanent: false
dart_sdk.js:29145 Stopping text recording...
I use Android Studio in my Flutter project; can you tell me the name of this program?
Is there any way for it to work on the app itself without publishing it?
Won't work for iOS (nor the web app on iPhone) :( The voice-to-text fails so badly. What can I do?
Where do these returned audios get stored? In the app? If so, how is storage managed? What if I want to store them in a database?
The API returns these as direct MP3s, so you'd need to write an additional custom function (or adjust the one used in the video) to upload the bytes to Firebase (or another storage bucket) in order to use them later!
@@FlutterFlow got it
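The adjustment FlutterFlow describes above could be sketched like this (an assumption, not the project's code; it presumes the firebase_storage package is added and Firebase is already initialized in the app):

```dart
// Sketch: upload the MP3 bytes returned by the TTS API to Firebase
// Storage so the audio can be stored and replayed later.
import 'dart:typed_data';
import 'package:firebase_storage/firebase_storage.dart';

Future<String> uploadTtsAudio(Uint8List mp3Bytes, String fileName) async {
  // Reference a path in the default storage bucket.
  final ref = FirebaseStorage.instance.ref('tts_audio/$fileName.mp3');

  // Upload the raw bytes with the correct content type.
  await ref.putData(
    mp3Bytes,
    SettableMetadata(contentType: 'audio/mp3'),
  );

  // Return a download URL you can save in your database document.
  return ref.getDownloadURL();
}
```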
Must we have a GPT-4 API key?
I faced this error while testing the API. How do I solve it ASAP, please?
{
"error": {
"message": "The model `gpt-4-1106-preview` does not exist or you do not have access to it.",
"type": "invalid_request_error",
"param": null,
"code": "model_not_found"
}
}
wow
If someone can get a working template of this, they will make millions. I have so many use cases but haven't been able to get the mic working on mobile apps.
Does anyone have a fix for the issue of it not working anymore?
Can anyone please educate me on why there is no FlutterFlow tutorial on the web showing how to connect a FlutterFlow app to a web admin backend, so that an admin can at least perform basic CRUD to populate the app we built in FlutterFlow? In a real-life application, most apps will need a backend to see users' orders, upload products, etc.
Try duplicating your FlutterFlow project to create a web-only application. This will allow you to connect the project to the same backend (Firebase, Supabase, etc) but deploy the app on a unique URL that is different than the main app you've published. This would allow you to have two interfaces that connect to the same backend database but could be hosted at app.[your-link].com and admin.[your-link].com.
@@FlutterFlow Thank you for this response. This means I can equally make a web admin panel with FlutterFlow and publish it for web to talk to the same Firebase DB as my app, to populate/perform CRUD on my app.
@@FlutterFlow I have asked this question several times also. I'm not sure why everyone who makes a FlutterFlow tutorial only focuses on the app client side and does not show how to connect it to a backend. You need to show this in one of your videos, please. If we cannot have an admin panel to populate the app, do we have to go into Firebase every time to change things manually?
@@lawrence1679 our team covers these types of videos on the livestream. There are a couple of solid videos on this to get you started:
ua-cam.com/video/zSHK3GyCpvw/v-deo.html
ua-cam.com/video/ec8coFykrWA/v-deo.html
Does this work if you were to export to mobile, given fetchSpeechAndPlay is using dart:html?
@FlutterFlow we have a problem with "fetchSpeechAndPlay".
I am getting a "model not found" error. Please help!
So many custom actions. I'm going to generate those with AI, or I'm going to call BuildShip first and run all the functions I want.
I tried to test the API call and it gave me a 404 (failure). The message said that I don't have access to GPT-4. Does this mean that I have to subscribe to ChatGPT Plus? Is there any other way?
Same problem; I am only using a ChatGPT 3.5 Turbo API key.
@@taha-fd6cr change the model
Yes, I got that error. The workaround is: API Call -> body -> replace line 8 with: "model": "gpt-3.5-turbo-0301",
That is the right answer, bro: GPT-3.5.
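For reference, the request body with the model line swapped per the workaround above might look like this (a sketch; keep the rest of your body and the [prompt] variable as configured in the video):

```json
{
  "model": "gpt-3.5-turbo",
  "messages": [
    { "role": "user", "content": "[prompt]" }
  ]
}
```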
I found a major difference between using my iPhone and my laptop. The majority of the time it doesn't work from the browser on my iPhone, but it does on my laptop. Has anyone else experienced this issue? Also, has anyone taken this project and used an AI assistant, passing an assistant ID?
I also noticed issues when attempting to run on iPhone (via browser). The response message playback seems to be delayed.
Do this using gemini api
Has anyone successfully compiled this feature to iOS and Android?
Because I'm struggling with dart:html.
Same here; please reply if you have somehow solved the problem.
Following this thread, as I also would like to get this working via iOS and Android.
@@internetisbeautifull I've asked the developer to rewrite the code without dart:html.
When we test it on a MacBook (virtual device), everything is okay.
But when I compile it for the App Store, I can't hear the answers. We haven't solved this issue yet.
@@AdrianMcMillian I've asked the developer to rewrite the code without dart:html.
When we test it on a MacBook (virtual device), everything is okay.
But when I compile it for the App Store, I can't hear the answers. We haven't solved this issue yet.
So ChatGPT replaced the dart:html and uses just_audio, a pub dependency at version 0.9.36. But I can't for the life of me figure out how to link it together.
Watched the first 6 minutes. Is this a text-to-speech tutorial or a speech-to-text tutorial? Asking as I saw you put an input microphone in the container...
Do you need to be a premium ChatGPT user for this?
You'll need to add billing information to OpenAI in order to obtain an API key. This project does NOT require the premium version of ChatGPT.
Can I build this application with a free account? Because I'm a newbie.
Any tutorial in Spanish? :/
I'm getting this error when publishing: "Target of URI doesn't exist: '/backend/backend.dart'. Try creating the file referenced by the URI, or try using a URI for a file that does exist." for each of the custom code sections
You copied everything from the code, including the imports, but in fact you should copy only what is marked with comments. Or just delete this backend import.