A UCL project to solve the problem of tedious digital pathology labeling

In this article, I am going to talk about one of the projects dearest to me: my undergraduate thesis at University College London. My thesis was about using unsupervised machine learning to break down huge Whole Slide Images and label them without supervision. This proved to be a much more difficult task than I originally thought. I honestly can't believe that if you search for “unsupervised digital pathology” on Google, the first result is my paper!

Cancer is the second leading cause of death worldwide, responsible for around 15% of deaths[1]…

Coding a small Flask server vs Coding a WebDNN browser model

Photo by Krzysztof Kowalik on Unsplash

Developing machine learning models is all fun and good, but after developing them you will probably want to deploy them in an app. The question is: should you run the model on the client side (probably a mobile device), or should you host it on a server, send the data over, and get the results back? In this article, I am going to discuss the trade-offs and what these two approaches entail.

Server-side models

Server-side machine learning models are the most widely used ML systems, simply because the architecture is simpler to implement. For example…
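The server-side setup described above can be sketched in a few lines of Flask. Note that the `/predict` route and the averaging placeholder “model” are illustrative choices of mine; in a real app you would load a trained model instead.

```python
# A minimal server-side inference endpoint with Flask.
from flask import Flask, request, jsonify

app = Flask(__name__)

def predict(features):
    # Placeholder "model": returns the mean of the inputs.
    # In practice this would call something like model.predict(features).
    return sum(features) / len(features)

@app.route("/predict", methods=["POST"])
def predict_route():
    # The client POSTs JSON, the server runs the model and returns the result.
    data = request.get_json()
    return jsonify({"prediction": predict(data["features"])})

# To serve locally: app.run(port=5000)
```

A mobile or web client then only needs to send one HTTP request per prediction, which is exactly what makes this architecture so simple to implement.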

Maximizing reward is the secret sauce to artificial intelligence

Photo by Robert Anasch on Unsplash

From David Silver, Satinder Singh, Doina Precup, and Richard Sutton at DeepMind, this paper proposes an intriguing hypothesis: that incentivizing AI agents with reward is enough to achieve artificial general intelligence. It is a philosophical paper rather than one with a machine learning model and code. I guess this gives us an indication of why DeepMind has been pouring so much of its effort and money into optimizing games with AI agents: they believe that developing the strongest reward-seeking agents is the key to artificial intelligence. In this article, we are going to understand why they believe so.

Developing skills

Each ability…
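The paper itself contains no model or code, but the core idea — an agent that improves purely by maximizing reward — can be illustrated with the simplest reward-seeking learner there is. The epsilon-greedy bandit below is entirely my own toy example, not from the paper:

```python
import random

def run_bandit(arm_means, steps=5000, eps=0.1, seed=0):
    """Epsilon-greedy agent: estimates each arm's mean reward and,
    most of the time, pulls the arm it currently believes is best."""
    rng = random.Random(seed)
    counts = [0] * len(arm_means)   # how often each arm was pulled
    values = [0.0] * len(arm_means) # running estimate of each arm's reward
    for _ in range(steps):
        if rng.random() < eps:
            arm = rng.randrange(len(arm_means))          # explore
        else:
            arm = max(range(len(arm_means)), key=lambda a: values[a])  # exploit
        reward = rng.gauss(arm_means[arm], 1.0)          # noisy reward signal
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]
    return values, counts

values, counts = run_bandit([0.1, 0.5, 0.9])
```

Despite knowing nothing about the environment, the agent ends up concentrating its pulls on the highest-reward arm — the kind of behavior the authors argue scales, in far richer environments, toward general intelligence.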

From convolutions to self-attention to dense layers and much more

Photo by Glenn Carstens-Peters on Unsplash

Computer vision has been evolving quite rapidly for some time now, and I think it's time we take a step back, investigate how the architectures have changed over time, and see the pros and cons of each one. Computer vision is a huge field and is fundamentally challenging, since to a computer an image is just a matrix of numbers. The central question driving these evolving networks is which operations and procedures, applied to those matrices, best turn raw numbers into quantifiable, useful features such as colors, textures, shades, and much more.

1. Convolutional Neural Networks (CNNs)

CNNs are the most popular…
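To make concrete what a convolution does to an image-as-matrix, here is a minimal NumPy sketch. The hand-written loop and the vertical-edge kernel are illustrative choices of mine; real CNNs learn their kernels and use highly optimized implementations.

```python
import numpy as np

def conv2d(image, kernel):
    # "Valid" 2-D convolution (strictly, cross-correlation,
    # which is what deep learning libraries actually compute).
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A tiny image: dark on the left, bright on the right.
image = np.zeros((4, 6))
image[:, 4:] = 10.0

# A classic vertical-edge detector kernel.
edge_kernel = np.array([[1.0, 0.0, -1.0]] * 3)

response = conv2d(image, edge_kernel)
# The response is large in magnitude only where the 3x3 window
# straddles the dark-to-bright edge.
```

Sliding one small kernel over the whole matrix is what gives CNNs their translation awareness and parameter efficiency compared with dense layers.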

A historical overview

Photo by Hugo Jehanne on Unsplash

The Palestinian-Israeli conflict has been going on since the early 1900s, when the mostly-Arab region was part of the Ottoman Empire and, starting in 1917, a ‘mandate’ run by the British Empire. Hundreds of thousands of Jews were moving into the area as part of Zionism, a movement among mostly European Jews to escape persecution and establish their own state in their ancestral homeland. Later, large numbers of Middle Eastern Jews also moved to Israel, either to escape anti-Semitic violence or because they were forcibly expelled.

In Nov. 1917 the British Government stated its support for a Jewish homeland…

A collection of the most widely used and useful ML libraries to include in your ML project

Photo by Waldemar Brandt on Unsplash

Libraries are one of the main enabling tools that let us quickly implement networks and build successful applications. ML libraries keep changing at a fast pace, and I thought it would be a good idea to give an overview of the best new ones. Several popular libraries have been around for quite a while, and I don't intend to review them since you are probably already familiar with them; those are:

  • PyTorch & TensorFlow/Keras
  • Pandas & Matplotlib
  • NumPy & scikit-learn

Those are by far the top libraries and there is no need to review them…

A comparison between the latest versions of PyTorch (1.8) and TensorFlow (2.5)

Photo by Vanesa Giaconi on Unsplash

TensorFlow/Keras and PyTorch are by far the two most popular machine learning libraries. TensorFlow is maintained and released by Google, while PyTorch is maintained and released by Facebook. In this article, I want to compare them in terms of:

  • What's new in the latest released versions
  • Which one to use & why (based on 2 years of doing ML projects)

TensorFlow 2.x:

There are multiple changes between TensorFlow 1 and TensorFlow 2.x; I am going to try to pinpoint the most important ones.

The first one is the release of TensorFlow.js. With web applications becoming more and more dominant, the need…

Speeding up Transformer training by up to 7x with the magic of Fourier transforms

Photo by Amr Taha™ on Unsplash

By replacing the attention sublayer with linear transformations, we are able to reduce the complexity and memory footprint of the Transformer architecture. We show that FNet offers an excellent compromise between speed, memory footprint, and accuracy, achieving 92% of the accuracy of BERT in a common classification transfer learning setup on the GLUE benchmark (Wang et al., 2018), but training seven times as fast on GPUs and twice as fast on TPUs

Source: arXiv
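The quoted idea — replacing the self-attention sublayer with a parameter-free Fourier transform — is small enough to sketch in NumPy. This is my own toy illustration, not the paper's official JAX implementation (which also wraps the sublayer in residual connections and layer normalization):

```python
import numpy as np

def fourier_sublayer(x):
    # FNet's drop-in replacement for self-attention: a 2-D DFT over
    # the sequence and hidden dimensions, keeping only the real part.
    # Note that there are no learned weights at all.
    return np.fft.fft2(x).real

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))  # 4 tokens, each an 8-dim embedding
mixed = fourier_sublayer(x)      # same shape; every output mixes all tokens
```

Because the FFT costs O(n log n) rather than attention's O(n²), and no attention matrices are stored, this is where the speed and memory savings in the quote come from.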

Many recent ML papers have focused on fiddling with transformer layers. It is quite interesting to see what works and what doesn't (even though we probably…

Going back to MLPs for image processing, simple but effective (with competitive results)

Photo by AbsolutVision on Unsplash

Image processing is one of the most interesting subareas of machine learning. It started with multi-layer perceptrons (MLPs), then convolutions, then self-attention (transformers), and now this paper brings us back to MLPs. If you are like me, your first question would be: how could an MLP achieve almost the same results as transformers and CNNs? That is the question we will be answering in this article. The newly proposed “MLP-Mixer” achieves results very close to the SOTA models trained on tons of data, at almost 3x the speed. Throughput (images/core/sec) was also an interesting metric in the paper.
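To see how plain MLPs can mix information both across patches and across channels, here is a stripped-down sketch of one Mixer block in NumPy. It is my own simplification: I use ReLU instead of GELU and omit the layer norms the real architecture applies before each MLP.

```python
import numpy as np

def mlp(x, w1, w2):
    # Two dense layers with a ReLU (the real Mixer uses GELU).
    return np.maximum(x @ w1, 0.0) @ w2

def mixer_block(x, tok_w1, tok_w2, ch_w1, ch_w2):
    """One simplified Mixer block. x has shape (patches, channels)."""
    # Token mixing: the MLP acts across patches, so transpose first.
    x = x + mlp(x.T, tok_w1, tok_w2).T
    # Channel mixing: the MLP acts on each patch's channel vector.
    x = x + mlp(x, ch_w1, ch_w2)
    return x

rng = np.random.default_rng(0)
patches, channels, hidden = 16, 32, 64
x = rng.standard_normal((patches, channels))
tok_w1 = rng.standard_normal((patches, hidden))
tok_w2 = rng.standard_normal((hidden, patches))
ch_w1 = rng.standard_normal((channels, hidden))
ch_w2 = rng.standard_normal((hidden, channels))
out = mixer_block(x, tok_w1, tok_w2, ch_w1, ch_w2)
```

The only ingredients are matrix multiplications, transposes, and residual connections — no convolutions and no attention — yet the transpose trick lets information flow between every pair of patches.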

The proposed…

A comprehensive overview of the book, what I learned after reading it, and whether you should get it or not

Photo by Kourosh Qaffari on Unsplash

Hands-On Machine Learning with Scikit-Learn, Keras & TensorFlow is probably one of the most popular ML books (if not the most popular). I just finished it recently and enjoyed it so much that I thought it would be worth writing a review of it. If you aren't familiar with this book, it's an O'Reilly book (which means the content quality is excellent), and it is aimed at beginners in machine learning. It includes everything: theory, code, exercises, and questions. But the best thing about it is the smooth explanations paired with code examples.

The book content

One of the most impressive…

Mostafa Ibrahim

Programmer. University College London Computer Science Graduate. ARM Full Stack Web Dev. Passionate about Machine Learning in Healthcare.
