You can stop improving the accuracy of your awesome AI product and still gain traction. I spent my early days at Blue Yonder doubting my own skills: making cool data plots with internal tools, but whenever something broke, spending hours figuring out how I had messed up. My fault? I was new. Just simplify - provide intuitive interfaces with accessible documentation - and a lot of people like me will use your product.
You might find yourself interested if you have ever found a user (ab)using a feature because they didn't understand its intent; if you've ever tried a cool new technology (ahem, AWS) but gave up after struggling just to get a toy example to work; or if you've ever looked at a fellow developer's PR and wondered whether it's even the same product you're working on. This talk is for everyone who works on a piece of software that is not meant solely for personal consumption. ML/AI tools already have a reputation for being opaque; add poor user experience to that, and you will have users who curse your product every day (not you, pandas).

I intend to channel all my rants into technical or philosophical tips. No prior knowledge is expected, though we will look briefly into some useful tools like Sphinx, doctests, and OpenAPI. I would like to discuss how some low-hanging fruits, if overlooked, can impede a product from being embraced by a wider audience, and how some of these ideas can improve development velocity and increase the "bus factor" of the team.

I will also share a few tips on how to address the structural hindrances to maintaining a user-friendly interface, particularly documentation (yes, documentation is part of your user interface). These challenges include how to convince your team lead to invest team resources in this effort, how to keep docs up to date systematically, and how to attempt to quantify the cost of cryptic or inaccessible documentation. I will not refrain from ranting and will also share some anecdotes about times when complex interfaces cost me my peace of mind (and sometimes my sanity).