I’ll be referring to this research paper throughout: “Comparison of Chest Radiograph Interpretations by Artificial Intelligence Algorithm vs Radiology Residents” (plus its supplemental content).
In this paper the researchers describe a system they built to test whether detecting chest illnesses from x-rays with artificial intelligence is a viable approach. Their findings suggest it is in fact viable and useful, though the AI was slightly less accurate than the radiologists. I believe better training techniques would improve this situation.
The main issue the researchers seemed to have was getting enough good data to train the AI as well as possible. This can be improved by starting with the largest pool of data you can get; their experiment had only two sources. To improve the accuracy of the AI, I would suggest continually feeding new data into the system by having radiologists interact with it via special user interfaces.
One way this could be done is by having the x-ray machine send each image into your software. Your software then runs its predictive analysis algorithms and saves the result. The radiologist can review the result and the x-rays on their touch screen monitor, and the user interface gives them the tools to confirm or change the system’s analysis and record this data. Each time a doctor does this, your AI software gets better at predicting illness.
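A minimal sketch of that confirm-or-correct record — the field names and illness labels here are my own illustrations, not anything from the paper:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AnalysisResult:
    """One AI prediction on an x-ray, plus the radiologist's verdict."""
    xray_id: str
    predicted_label: str               # e.g. "pneumonia" (hypothetical label)
    confidence: float                  # model score between 0 and 1
    confirmed: Optional[bool] = None   # set when the radiologist reviews
    corrected_label: Optional[str] = None

def record_review(result: AnalysisResult, agrees: bool,
                  correction: str = "") -> AnalysisResult:
    """Store the radiologist's confirmation or correction for retraining."""
    result.confirmed = agrees
    if not agrees and correction:
        result.corrected_label = correction
    return result
```

Every reviewed record like this becomes a fresh labeled training example for the next round of model training.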
Another improvement I can suggest for this radiologist user interface: give them a way to circle the location of the illness on the x-ray and label it. Over time this would let your software build a better understanding of the exact issues you want it to find and diagnose. It could also help your software run more efficiently. For example, if one specific part of a lung gets cancer more often than other parts, you can have your AI scan those sections first and the rest after.
In the same line of thinking, another improvement I would suggest is having the software highlight or circle locations in the x-ray where it thinks it has discovered something. This will further help the radiologists and other doctors by drawing their eyes to the immediate issue. Circling, highlighting, and so on are all easy to do without harming the original image.
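Here is a toy sketch of that idea: the box is drawn on a copy of the pixel data, so the original image is never touched. A real product would use a proper imaging library (Pillow, OpenCV, etc.), but the non-destructive principle is the same.

```python
import copy

def draw_box(image, top, left, bottom, right, marker=255):
    """Return a copy of `image` (a 2D list of pixel values) with a
    rectangle outline drawn on it; the original list stays untouched."""
    overlay = copy.deepcopy(image)
    for col in range(left, right + 1):
        overlay[top][col] = marker      # top edge
        overlay[bottom][col] = marker   # bottom edge
    for row in range(top, bottom + 1):
        overlay[row][left] = marker     # left edge
        overlay[row][right] = marker    # right edge
    return overlay

xray = [[0] * 8 for _ in range(8)]      # stand-in for grayscale pixels
marked = draw_box(xray, 2, 2, 5, 5)
assert xray[2][2] == 0 and marked[2][2] == 255   # original untouched
```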
I believe better ongoing collection of data via user interfaces will add more value to your product by giving your users confidence that your system is designed to continually improve itself. Talk about smart software. Some of the ideas I list above would sound great in marketing materials.
Don’t run the AI on your own machines; create a “new age” system instead. Don’t spend millions a year on hosting servers for people to run your AI software on. Amazon AWS now offers GPUs specifically for running AI algorithms.
This means you create a system where your software is hosted on a server like a website. The operator/radiologist logs into your system. Their x-ray machine can connect to your system, or they have another way to feed the x-ray in. Through the user interface they choose to run your program. Your program fires up an AWS compute instance of their choice and runs the AI software. Their hospital pays for this instance, not your company. These instances can run from 5 minutes to 5 years; if one only runs each time an x-ray needs analysis, it may run for about 10 minutes, which is a few pennies. After it runs, it feeds the data back into your server.
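To put a number on “a few pennies” — the hourly rate below is an assumption for illustration only; actual AWS prices vary by instance type and region:

```python
# Hypothetical GPU instance at $0.526/hour, billed per second.
hourly_rate = 0.526
minutes_per_analysis = 10
cost = hourly_rate * minutes_per_analysis / 60
print(f"${cost:.3f} per analysis")   # under a dime per x-ray
```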
In this server design you are storing your AI software and a database of past x-ray results, which can be used to further train and calibrate your AI to make it more effective. All of the actual computing and running of your software is done on AWS servers (or another platform). Your company’s programmers can write scripts with something like Terraform and Packer that take the request from the user interface and fire up an AWS compute instance.
Each of your customers (hospitals) is responsible for paying their own AWS bill. The last thing you want is to have them running your AI on your servers. Not only will that cost you much more by owning the servers, it will also cost more to maintain them and pay more employees. You also don’t want a reputation for having a slow piece of software: if too many people try to run your AI at once, your system crashes or you get a huge bill.
Creating a user interface that lets the doctors choose which AWS instance to run your software on gives them a greater feeling of control. Do they want to spend $0.10 or $1.00? Or should the hospital dictate? All that matters then is that your software runs properly.
After the radiologist runs your AI software on an AWS instance, the data is saved back to your server and the radiologist can be sent a notification right to their phone. This flexibility means they could use your app right from their phone if they are really busy, sit down at a desk or laptop, or use a touch screen smart TV with internet access. Designed like this, your system is beautiful and you can gather precious data much more easily, all while giving doctors conveniences they will crave.
Imagine this scenario: a person comes into the emergency room complaining of chest pain and trouble breathing. The emergency room nurse sets them up for an x-ray. That machine feeds into your system, and your system alerts a radiologist with the results of the AI processing. The radiologist, short on time, whips out their tablet or phone, taps a few times, circles something, and hits submit. Your system gets instant confirmation of whether it was correct and can improve.
Another added benefit of a system like this is more than one radiologist can have a look quickly and easily… right from their phones.
Having your system scripted to run the AI on AWS sounds complicated, but it actually isn’t that hard. Basically, your server takes the request from the user interface when the doctor chooses which AWS instance they want. It then sends a request to a script that fires up a matching AWS compute instance and runs your code. When the run completes, the script shuts down the AWS instance and returns the results to your system. Your customers/hospitals have an AWS account, and your system has an interface for them to enter their account information.
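That launch-run-terminate lifecycle can be sketched with boto3-style EC2 calls. The AMI id, instance type, and user-data script below are placeholders, and the client is passed in as a parameter so the flow can be exercised without a real AWS account:

```python
def run_analysis_job(ec2, ami_id, instance_type, user_data):
    """Fire up a compute instance, let its user-data script run the AI,
    then terminate it so the hospital only pays for the minutes used.
    `ec2` is a boto3 EC2 client (or a stub with the same interface)."""
    resp = ec2.run_instances(
        ImageId=ami_id,               # placeholder AMI baked with your AI software
        InstanceType=instance_type,   # whichever size the doctor chose
        MinCount=1,
        MaxCount=1,
        UserData=user_data,           # script that runs the analysis and uploads results
    )
    instance_id = resp["Instances"][0]["InstanceId"]
    # ... in production: poll your server until the results arrive ...
    ec2.terminate_instances(InstanceIds=[instance_id])
    return instance_id
```

Passing the client in also makes it easy to point the same code at a different hospital’s AWS account, since each customer supplies their own credentials.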
Building your system like this can help you gain more customers more quickly, since you can offer a lower acquisition price by not needing to charge for server resources.
AI algorithms have to change, and they have to change often in a new system until you work out all of the minor details. Algorithms look like fancy math hieroglyphs… until they have to become code. I’d propose creating a user interface for your head AI algorithm person to easily enter and change the algorithms.
The way this would work is that your user interface gives the person a way to enter their algorithm. The system takes those symbols and maps them to matching character codes, then maps those character codes to the underlying code. I am a really huge fan of writing code that writes code so I don’t have to write code. LOL https://akashicseer.com/web-development/how-to-create-100-symfony-5-doctrine-2-or-3-repositories/
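A toy version of that symbol-to-code mapping — the symbols and templates here are made up for illustration, and a real system would parse whole expressions, but the code-writing-code idea is the same:

```python
# Map each algorithm symbol to a Python code template.
SYMBOL_MAP = {
    "Σ": "sum({var})",
    "μ": "(sum({var}) / len({var}))",
}

def generate(symbol: str, var: str) -> str:
    """Turn a math symbol plus a variable name into runnable Python code."""
    return SYMBOL_MAP[symbol].format(var=var)

scores = [2, 4, 6]
expr = generate("μ", "scores")     # "(sum(scores) / len(scores))"
print(expr, "=", eval(expr))       # evaluates to 4.0
```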
I’ve written all kinds of mapping code. I’ve spent a lot of time writing code that writes SQL queries. I think many other things would be easier, honestly.
Such code means your programmer writes the mapping code once, and your AI person doesn’t need to know anything about programming to watch their algorithms work.
You don’t have to stop at software that helps doctors quickly diagnose illnesses. You could use the precious data your server collects to create a training system for radiologists. Imagine a game where a radiologist is shown an x-ray and has to guess the illness; when they submit, they find out whether they were correct and why or why not. You wouldn’t use this data to train your AI, but it is what is known as value added.
Your system could also spot radiologists who are not so good at their job and help better train them.
Another thing I would suggest is to have your team brainstorm all the features they can think of, but do not plan on adding them to your product immediately. Add them over time. This will allow you to get your product to market faster. It will also make your customers feel like they are getting something for their money as they see constant changes and improvements. The final reason is to use it as a weapon against your competitors: if you make a superior product, you will be copied, but the survivor of this kind of competition war is the constant innovator. If you have planned future innovations but do not speak of them, all your competition can do is copy you. When you are one step behind, you get mud in your eyes on wet days as your competitor runs ahead of you. LOLOLOLOL
About my app
The app I am building contains a drawing/image editing user interface which lets users edit images or create drawings on any device. They can start on a tablet, save, and continue working from their desktop or laptop. I’ve spent a good bit of time learning about image, video, and audio processing.
My app has a user reputation system because it has zero administrators; the users of the social platform are the controllers. User A reports an item. The system asks a number of user admins (actual users, not employees) to judge the content. Once they do, the system weighs their judgments and makes a decision. Admins have admin reputation, users have user reputation, and these reputations are built from each user’s actions and interactions with the system.
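A minimal sketch of that weighted judgment — the 50% threshold is a made-up illustration, not the system’s actual tuning:

```python
def decide_is_bad(judgments):
    """judgments: list of (admin_reputation, says_bad) pairs.
    Each admin's vote counts in proportion to their reputation;
    content is ruled bad when the weighted 'bad' share passes 50%."""
    total = sum(rep for rep, _ in judgments)
    bad = sum(rep for rep, says_bad in judgments if says_bad)
    return total > 0 and bad / total > 0.5

votes = [(0.9, True), (0.8, True), (0.3, False)]
print(decide_is_bad(votes))   # True: the high-reputation admins agree
```

Because the weights come from each admin’s track record, one careless or malicious judge can’t outvote several trusted ones.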
Starting out, the system doesn’t really use AI. It uses mostly basic machine learning algorithms and very basic math; in the future it will move up to using more AI. There are several reasons for this. First off, AI needs data, and it takes time to gather that data. Once enough data is collected, I can finally start running AI on it. AI is only as effective as the data you give it; in software engineering we have a saying for this, “garbage in equals garbage out.” So for a long while my system will use user judgments to gather data. Once it has enough, it can start guessing on its own what may or may not be bad content, then flag what it thinks is bad and have the user admins confirm it. This is the exact cycle I was describing above.