Categories: Medicine, Software Development

A proposed system to help physicians quickly detect illness via x-ray

Throughout this post I'll be referring to the research paper "Comparison of Chest Radiograph Interpretations by Artificial Intelligence Algorithm vs Radiology Residents" (plus supplemental content). Click the link to view it:
https://www.researchgate.net/publication/344604144_Comparison_of_Chest_Radiograph_Interpretations_by_Artificial_Intelligence_Algorithm_vs_Radiology_Residents_Supplemental_content

In this paper the researchers describe a system they created to test whether detecting illnesses of the chest with x-rays and artificial intelligence is a viable approach. Their findings suggest that it is in fact viable and useful, although their research showed the AI was slightly less accurate than the radiologists. I believe better training techniques would improve this situation.

The main issue the researchers seemed to have was getting enough good data to train the AI as well as possible. This can be improved by starting with the largest pool of data you can gather; their experiment drew from only two sources. To improve the accuracy of the AI, I would suggest continually feeding new data into the system by having radiologists interact with it through purpose-built user interfaces.

One way this could be done is by having the x-ray machine send the image directly into your software. Your software then runs its predictive analysis algorithms and saves the result. The radiologist reviews the result and the x-rays on their touch-screen monitor, and your program gives them the tools in the user interface to confirm or correct the system's analysis and record that data. Each time a doctor does this, your AI software becomes better at predicting illness.
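
As a rough sketch of what each confirm-or-correct action might look like as data, here is one possible shape for the feedback record (the field names and the endpoint URL are my assumptions for illustration, not anything from the study):

```typescript
// Hypothetical shape of one radiologist feedback record (all names are assumptions).
interface AnalysisFeedback {
  studyId: string;               // identifier of the x-ray study in your system
  aiFindings: string[];          // labels the AI predicted, e.g. ["pneumothorax"]
  radiologistFindings: string[]; // labels the radiologist confirmed or substituted
  confirmed: boolean;            // true if the radiologist accepted the AI result as-is
  reviewedBy: string;            // radiologist identifier
  reviewedAt: string;            // ISO timestamp
}

// Minimal sketch: post the feedback to a (hypothetical) training-data endpoint.
async function recordFeedback(feedback: AnalysisFeedback): Promise<void> {
  await fetch("https://example-hospital-ai.test/api/feedback", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(feedback),
  });
}
```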

Another improvement I can suggest for this radiologist-facing user interface: give them a way to circle the location of the illness on the x-ray and label it. Over time this would allow your software to build a better understanding of the exact issues you want it to find and diagnose. It could also help your software run more efficiently. For example, if one specific part of a lung develops cancer more often than other parts, you can have your AI scan those sections first and the rest afterward.
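
One simple way to represent those circled-and-labeled regions is as labeled circles in image coordinates. This is only a sketch; the field names and values are illustrative assumptions:

```typescript
// Hypothetical annotation model for a circled-and-labeled region (names are assumptions).
interface RegionAnnotation {
  studyId: string;     // which x-ray this annotation belongs to
  label: string;       // e.g. "nodule", "effusion"
  // Circle in image-pixel coordinates so it survives resizing of the on-screen view.
  centerX: number;
  centerY: number;
  radius: number;
  annotatedBy: string; // radiologist identifier
}

// Example: the kind of record the UI would save when a radiologist circles a finding.
const example: RegionAnnotation = {
  studyId: "study-123",
  label: "nodule",
  centerX: 812,
  centerY: 440,
  radius: 36,
  annotatedBy: "radiologist-42",
};
```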

In the same line of thinking, another improvement I would suggest is having the software highlight or circle the locations in the x-ray where it thinks it has discovered something. This further helps the radiologists and other doctors by drawing their eyes to the immediate issue. Circling, highlighting, and so on are all very easy to do without harming the original image.
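
Since the tooling I describe later in this post is JavaScript/canvas based, here is a minimal sketch of drawing such a highlight on a separate overlay canvas stacked above the x-ray, so the underlying image is never modified (the element ID is an assumption):

```typescript
// Minimal sketch: draw a highlight circle on an overlay canvas positioned over the x-ray.
// The x-ray image itself is never altered; the overlay can be cleared or redrawn freely.
function highlightFinding(
  overlay: HTMLCanvasElement,
  centerX: number,
  centerY: number,
  radius: number
): void {
  const ctx = overlay.getContext("2d");
  if (!ctx) return;
  ctx.clearRect(0, 0, overlay.width, overlay.height); // wipe previous highlights only
  ctx.strokeStyle = "red";
  ctx.lineWidth = 3;
  ctx.beginPath();
  ctx.arc(centerX, centerY, radius, 0, Math.PI * 2);
  ctx.stroke();
}

// Usage (assumes a <canvas id="overlay"> absolutely positioned over the x-ray <img>):
const overlay = document.getElementById("overlay") as HTMLCanvasElement;
highlightFinding(overlay, 812, 440, 36);
```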

I believe better ongoing collection of data via user interfaces will add more value to your product by giving its users confidence that the system is designed to continually improve itself. Talk about smart software. Some of the ideas listed above would sound great in marketing materials.

 

Don't run the AI on your own machines. Create a "new age" system. Don't spend millions a year on hosting servers for people to run your AI software on. Amazon AWS now offers GPU instances aimed specifically at running AI workloads.

This means you create a system where your software is hosted on a server, like a website. The operator/radiologist logs into your system. Their x-ray machine connects to your system, or they have some other way to feed the x-ray in. Through the user interface they choose to run your program. Your program fires up an AWS compute instance of their choice and runs the AI software. Their hospital pays for this instance, not your company. These instances can run for anywhere from 5 minutes to 5 years; if one only runs each time an x-ray needs analysis, that might be around 10 minutes, which costs a few pennies. After it runs, it feeds the data back into your server.
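
To make the "a few pennies" claim concrete, here is a back-of-the-envelope calculation. The hourly rate below is an illustrative assumption, not a quoted AWS price; check current pricing for the GPU instance type you actually choose:

```typescript
// Rough per-analysis cost estimate for an on-demand instance billed by the hour.
// hourlyRate is an assumption for illustration, not a real AWS price.
function estimateRunCost(hourlyRate: number, runMinutes: number): number {
  return (hourlyRate / 60) * runMinutes;
}

// Example: a ~$0.50/hour GPU instance running for 10 minutes ≈ $0.08 per analysis.
console.log(estimateRunCost(0.5, 10).toFixed(2));
```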

In the above server design you are storing your AI software and a database of past x-ray results, which can be used to further train and calibrate your AI and make it more effective. All of the actual computing and running of your software is done on AWS servers. Your company's programmers can write scripts, with something like Terraform and Packer, that take the request from the user interface and fire up an AWS compute instance, or one on another platform.

Each of your customers (hospitals) is responsible for paying their own AWS bill. The last thing you want is to have them running your AI on your servers. Not only would that cost you much more because you own the servers, it would also cost you more to maintain them and to pay more employees. You also don't want a reputation for having a slow piece of software: if too many people try to run your AI at once, your system crashes or you get a huge bill.

Creating a user interface that lets the doctors choose which AWS instance to run your software on gives them more of a feeling of control. Do they want to spend $0.10 or $1.00? Or let the hospital dictate? All that matters then is that your software runs properly.

After the radiologist runs your AI software on an AWS instance, the data is saved back to your server and the radiologist can be sent a notification right on their phone. This flexibility in interfaces means they could use your app right from their phone if they are really busy, sit down at a desk or laptop, or use a touch-screen smart TV with internet access. Designed like this, your system is beautiful, you can gather precious data much more easily, and you give doctors conveniences they will crave.
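
As one illustration of the "notification right on their phone" step, here is a minimal sketch using the standard browser Notification API. A production system would more likely use a push service so notifications reach a locked phone; I'm not specifying one here:

```typescript
// Minimal sketch: notify the signed-in radiologist in the browser when results arrive.
// A real deployment would likely use a push service for phones that are locked or asleep.
async function notifyResultReady(studyId: string): Promise<void> {
  if (!("Notification" in window)) return;          // environment has no Notification API
  const permission = await Notification.requestPermission();
  if (permission !== "granted") return;
  new Notification("X-ray analysis ready", {
    body: `AI results for study ${studyId} are ready for review.`,
  });
}
```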

Imagine this scenario for a second: a person comes into the emergency room complaining of chest pain and trouble breathing. The emergency room nurse sets them up for an x-ray. That machine feeds into your system. Your system alerts a radiologist with the results of the AI processing. The radiologist, short on time, whips out their tablet or phone, taps a few times, circles something, and hits submit. Your system gets instant confirmation of whether it was correct and can improve.

Another added benefit of a system like this is that more than one radiologist can have a look quickly and easily… right from their phones.

Having your system scripted to run the AI on AWS sounds complicated, but it actually isn't that hard. Basically, your server takes the request from the user interface when the doctor chooses which AWS instance they want it to run on. It then sends a request to a script that fires up a matching AWS compute instance and runs your code. When the run completes, it shuts down the AWS instance and returns the results to your system. Your customers/hospitals have their own AWS accounts, and your system has an interface for them to enter their information.
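
Here is a minimal sketch of that "fire up a matching instance, run the code, shut it down" step using the AWS SDK for JavaScript (v3). The AMI ID, instance type, file paths, and callback URL are placeholders and assumptions; in practice the image would be one you pre-build (for example with Packer) containing your AI software:

```typescript
import { EC2Client, RunInstancesCommand } from "@aws-sdk/client-ec2";

// Minimal sketch: launch a short-lived GPU instance that runs one analysis job,
// then powers itself off (and is terminated) when the job finishes.
async function runAnalysisJob(studyId: string): Promise<string> {
  const client = new EC2Client({ region: "us-east-1" });

  // User-data script executed on boot: run the analysis, then shut down.
  const userData = [
    "#!/bin/bash",
    `/opt/ai/analyze --study ${studyId} --callback https://example-hospital-ai.test/api/results`,
    "shutdown -h now",
  ].join("\n");

  const result = await client.send(
    new RunInstancesCommand({
      ImageId: "ami-0123456789abcdef0", // pre-built image (e.g. via Packer) with the AI software
      InstanceType: "g4dn.xlarge",      // in practice, the type the doctor picked in the UI
      MinCount: 1,
      MaxCount: 1,
      // With this setting, the in-instance shutdown terminates the instance instead of stopping it.
      InstanceInitiatedShutdownBehavior: "terminate",
      UserData: Buffer.from(userData).toString("base64"),
    })
  );

  return result.Instances?.[0]?.InstanceId ?? "";
}
```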

Building your system like this can help you gain more customers more quickly, because you can offer a lower acquisition price since you don't need to charge for server resources.

AI algorithms have to change, and they have to change often in a new system until you work out all of the minor details. Algorithms look like fancy math hieroglyphs… until they have to become code. I'd propose creating a user interface that lets your head AI algorithm person easily enter and change the algorithms.

The way this would work is that your user interface gives the person a way to enter their algorithm. The system then takes those symbols and maps them to the matching character codes, and then maps those character codes to the underlying code. I am a really huge fan of writing code that writes code so I don't have to write code. LOL https://akashicseer.com/web-development/how-to-create-100-symfony-5-doctrine-2-or-3-repositories/
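
As a very rough sketch of the "symbols map to underlying code" idea, here is a toy example. The symbol set and the left-to-right evaluation are illustrative assumptions, not a real algorithm-entry DSL:

```typescript
// Toy sketch: map a few math symbols entered in the UI to the functions that implement them.
const operations: Record<string, (a: number, b: number) => number> = {
  "+": (a, b) => a + b,
  "−": (a, b) => a - b, // Unicode minus sign from the UI
  "×": (a, b) => a * b,
  "÷": (a, b) => a / b,
};

// Evaluate a simple space-separated expression like "3 × 4 + 2", strictly left to right.
function evaluate(expression: string): number {
  const tokens = expression.trim().split(/\s+/);
  let result = Number(tokens[0]);
  for (let i = 1; i < tokens.length; i += 2) {
    const op = operations[tokens[i]];
    if (!op) throw new Error(`Unknown symbol: ${tokens[i]}`);
    result = op(result, Number(tokens[i + 1]));
  }
  return result;
}

console.log(evaluate("3 × 4 + 2")); // 14 (no operator precedence; evaluated left to right)
```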

I’ve written all kinds of mapping code. I’ve spent a lot of time writing code that writes SQL queries. I think many other things would be easier, honestly.

Such code would mean your programmer writes the mapping code one time, and your AI person doesn't have to know anything about programming to watch their algorithms work.

You don't have to stop at software that helps doctors quickly diagnose illnesses. You could use the precious data your server collects to create a training system for radiologists. Imagine a game where a radiologist is shown an x-ray and has to guess the illness. When they submit, they find out whether they are correct and why or why not. You wouldn't use this data to train your AI, but it is what is known as value added.

Your system could also spot radiologists who are not so good at their job and help train them better.

Another thing I would suggest is to have your team brainstorm all the features they can think of, but do not plan on adding them to your product immediately. Add them over time. This will allow you to get your product to market faster, and it will make your customers feel like they are getting something for their money as they see constant changes and improvements. The final reason is to use it as a weapon against your competitors. If you make a superior product, you will be copied, but the survivor of this kind of competition war is the constant innovator. If you have planned future innovations but do not speak of them, all your competition can do is copy you. When you are one step behind, you get mud in your eyes on wet days as your competitor runs ahead of you. LOL

About my app

The app I am currently working on uses the technology you would need to build the interfaces I describe. Basically, I use JavaScript with a canvas element to create an interactive drawing tool. This technology runs on all devices with newer browsers, which means you do not have to create a web user interface, then another for an Apple app, another for a Windows app, another for Google Play, and so on. These days I build it once and that is all. I am a lazy programmer. LOL

The app I am building contains a drawing/image-editing user interface that lets users edit images or create drawings on any device. They can start on a tablet, save, and keep working from their desktop or laptop. I've spent a good bit of time learning about image processing, video processing, audio processing, etc.
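
A minimal sketch of one way to get that "start on a tablet, finish on a laptop" behavior is to store the drawing as a list of strokes rather than pixels, so any device can reload and redraw it. The stroke format below is illustrative, not my app's actual format:

```typescript
// Illustrative stroke-based drawing format so a drawing can be saved and resumed anywhere.
interface Stroke {
  color: string;
  width: number;
  points: { x: number; y: number }[];
}

// Serialize the drawing; send this to the server keyed by a drawing id.
function saveDrawing(strokes: Stroke[]): string {
  return JSON.stringify(strokes);
}

// Redraw a saved drawing onto any canvas, on any device.
function loadDrawing(json: string, ctx: CanvasRenderingContext2D): void {
  const strokes: Stroke[] = JSON.parse(json);
  for (const s of strokes) {
    ctx.strokeStyle = s.color;
    ctx.lineWidth = s.width;
    ctx.beginPath();
    s.points.forEach((p, i) => (i === 0 ? ctx.moveTo(p.x, p.y) : ctx.lineTo(p.x, p.y)));
    ctx.stroke();
  }
}
```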

My app has a user reputation system because it has zero administrators. The users of the social platform are the controllers. User A reports an item. The system asks x number of user admins (actual users, not employees) to judge the content. Once they do, the system weighs their judgments and makes a decision. Admins have admin reputation, users have user reputation, and these reputations are built based on users' actions and interactions with the system.
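
A toy sketch of the "weigh their judgments and make a decision" step could look like this. The reputation scale and the majority threshold are assumptions for illustration, not my app's exact rules:

```typescript
// Toy sketch of weighing admin judgments by reputation (scale and threshold are assumptions).
interface Judgment {
  adminReputation: number; // e.g. 0..100, earned through past interactions
  removeContent: boolean;  // this admin's verdict on the reported item
}

// Reputation-weighted vote: remove if the weighted "remove" votes exceed half the total weight.
function decide(judgments: Judgment[]): "remove" | "keep" {
  const totalWeight = judgments.reduce((sum, j) => sum + j.adminReputation, 0);
  const removeWeight = judgments
    .filter((j) => j.removeContent)
    .reduce((sum, j) => sum + j.adminReputation, 0);
  return removeWeight > totalWeight / 2 ? "remove" : "keep";
}

// Example: two high-reputation admins outweigh one low-reputation dissenter.
console.log(decide([
  { adminReputation: 90, removeContent: true },
  { adminReputation: 80, removeContent: true },
  { adminReputation: 20, removeContent: false },
])); // "remove"
```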

Starting out, the system doesn't really use AI. It uses mostly basic machine learning algorithms and very basic math; in the future it will move up to using more AI. There are several reasons for this. First off, AI needs data, and it takes time to gather that data. Once enough data is collected, I can finally start running AI on it. AI is only as effective as the data you give it; in software engineering we have a saying for this: "garbage in, garbage out." So for a long while my system will use user judgments to gather data. Once it has enough data, it can start guessing on its own what may or may not be bad content. It can then flag what it thinks is bad and have the user admins confirm it. This is the exact cycle I was describing above.

Categories: Medicine, Resources

Articles, resources, and information about intelligence

Research study explains why highly intelligent people prefer to be alone

What is the purpose of theta brain waves – people with ADHD have increased theta wave production. But what are theta waves?

Categories: Medicine, Science

Articles about genetics

Gene swapping – an article about how all organisms naturally acquire genes from other, unrelated organisms such as bacteria. Very interesting article.

Expression of multiple horizontally acquired genes is a hallmark of both vertebrate and invertebrate genomes – research paper

Categories: Medicine, Resources

Interesting genetic and medical links and resources

Genetic Associations between Voltage-Gated Calcium Channels and Psychiatric Disorders

Allergic tendencies are associated with larger gray matter volumes

Bad News for the Highly Intelligent

Genetic glitch at the root of allergies revealed

The Importance of Spatial Reasoning in Early Childhood Mathematics – I find this interesting because the article above about allergic tendencies mentions that people who suffer severe allergies have increased spatial reasoning, which makes you better at mathematics. I have super severe allergies, and math has always felt like a boring, repetitive exercise. I have lots of math books; I learn and understand math immediately. Algebra was boring, physics was just applied algebra, same with geometry. I only do math when I need to, and often just review what I need to get the problem solved.

Theory on autism

Relationships between Depression and High Intellectual Potential

New study reveals why some people are more creative than others – I read about creativity because I am insanely creative, to the point that I can't stop daydreaming. When I am not doing something super complex that requires complex thinking, I go into imagination land, the world around me disappears, and I make huge messes as I solve problems like how gravity works or how to manipulate matter and create new atoms. I really want to learn more about math, chemistry, and physics; my brain is insanely creative. I have a severe form of ADHD and one of the highest IQs ever measured.

Comparison of Chest Radiograph Interpretations by Artificial Intelligence Algorithm vs Radiology Residents