Hey folks! If you have any questions or feedback, fire away. I'll check back here periodically to follow up.
And thanks so much for watching 🙏
Loved it! Inspiring and genuine. It's refreshing to see real examples on CD + AI. Thank you!
Thanks, I'll be back later in the day!
Finally I feel seen at Config. Thanks, Ryan, for giving visibility to all the UX Writers and Content Designers of the world. We need more of this at Config 2025!! P.S. We are also Designers 🔥
This was so relatable for me, thanks so much for sharing!
We opted out of content training, but can we be 100% sure that just using the AI features won't leak any of our data/content/concepts to train the AI?
Okay, I'm gonna have a bunch of questions. How do you think we could train AI to embed accessibility *first* into a content challenge? (P.S., great presentation! Fun and smart.)
Great question, and I'll admit that I have no idea.
My gut says that the onus will continue to be on us for the foreseeable future, like having the tools help us create the labels, tooltips, etc. that we can then add to our products. There's probably also an opportunity for tools like Figma to help us do that in situ. With our new Code Connect feature, I could see us making it possible to pipe ARIA labels and other accessibility bits directly into code via components.
That's my long-winded way of saying "it's probably still on us to do that", at least for now :)
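For anyone curious what that could look like, here's a rough sketch using Code Connect's React API. It's just an illustration, not something we ship: the Button component and the "Label"/"Disabled" property names are placeholders, and your design system's actual components and Figma properties would differ.

```tsx
// Button.figma.tsx — a hypothetical Code Connect mapping (sketch only)
import * as React from 'react'
import figma from '@figma/code-connect'

// Placeholder design-system component; stands in for whatever your library exposes.
function Button(props: React.ButtonHTMLAttributes<HTMLButtonElement>) {
  return <button {...props} />
}

// Map the Figma component's properties to code props, so the accessible label
// travels with the component instead of being re-typed by hand in code.
figma.connect(Button, 'https://www.figma.com/design/<your-file>?node-id=<node>', {
  props: {
    label: figma.string('Label'),        // text property on the Figma component
    disabled: figma.boolean('Disabled'), // boolean property on the Figma component
  },
  example: ({ label, disabled }) => (
    <Button aria-label={label} disabled={disabled}>
      {label}
    </Button>
  ),
})
```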
End of our jobs. Thank you
You may be totally right! Only time will tell, but I'll stay semi-optimistic for now. (There's plenty else to be bummed about right now, so... I'm picking my emotional battles 🫠)
It will take jobs, because now the secretary can just build the app for the company.
Designers are problem solvers and human advocates. This will just make our projects much more productive and allow us to spend more time testing and iterating.