
Wassim Seifeddine

Installing BLAST on macOS and downloading the database

BLAST (Basic Local Alignment Search Tool) is a widely used tool for comparing nucleotide or protein sequences against databases. If you’re working on macOS and need to use any of its tools, this guide will help you install BLAST, download necessary databases, and run BLAST searches efficiently.

Installing

Manually

Installing BLAST manually gives you control over the installation process and allows you to install the latest version directly from NCBI.
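As a rough sketch of what the manual route looks like (the version number and tarball name below are examples; check the NCBI FTP `LATEST` directory for the current release), with the Homebrew one-liner shown as an alternative:

```shell
# Manual install: download the prebuilt BLAST+ binaries from NCBI
# (replace 2.16.0 with the current version listed on the FTP site)
curl -O https://ftp.ncbi.nlm.nih.gov/blast/executables/blast+/LATEST/ncbi-blast-2.16.0+-x64-macosx.tar.gz
tar -xzf ncbi-blast-2.16.0+-x64-macosx.tar.gz

# Make the binaries available on your PATH for this shell session
export PATH="$PWD/ncbi-blast-2.16.0+/bin:$PATH"

# Verify the install
blastn -version

# Alternatively, install via Homebrew:
# brew install blast

# Preformatted databases can be fetched with the bundled helper script, e.g.:
# update_blastdb.pl --decompress swissprot
```

To make the `PATH` change permanent, add the `export` line to your `~/.zshrc`.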

Intro

In this blog I will share my thoughts on machine learning, although some other topics might pop up. I will try to be as concise as possible and avoid all the unnecessary jargon *cough* academia *cough*. I hope you not only enjoy the content but also learn something new.

Disclaimers:

  1. I am not an expert on all of the topics I will be discussing. I am just a person trying to learn and share my thoughts.
  2. Posts are always a work in progress. I will try to keep them up to date, but I might miss some things.
  3. There will be mistakes in the posts. I will fix them as soon as I notice them; if you notice any, please let me know by email.
  4. If you have ideas of topics, please share them with me. I’m always keen to learn new things :)

Neural network acceleration

Why accelerate

Well, we have neural networks; they are awesome and they work, but there's a problem: THEY ARE HUGE. We have scaled from hundreds of millions of parameters to hundreds of BILLIONS. This makes using neural networks in real life quite hard, as you normally don't have these huge computational capabilities available everywhere you want to run them.

Neural networks have proven to be a very valuable tool in scenarios where the transformation from inputs to outputs is unknown. Suppose you are asked to write an algorithm to classify an image as a cat or a dog; how would you do that? Well, first you might ask yourself, "what makes an image a cat?". Answering this question is incredibly hard because of the vast number of cases you would need to cover to make your algorithm generalize. This is where neural networks shine: given an input $x_{i}$ with its respective label $y_{i}$, you can use a neural network model with a set of parameters $\theta$, denoted by $M(\theta)$, to approximate $y_{i} = f(x_{i})$. Normally, with enough data, you can get a very good estimate of $f$. However, this comes at a huge cost: training and running these large networks is expensive in terms of time and memory because of the huge number of parameters you need to learn to get the best approximation, which makes these models hard to use in real-life scenarios. The recent trend of models getting bigger and bigger in pursuit of better performance is making this problem even harder.
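To get a feel for how quickly the parameter count of $\theta$ blows up, here is a back-of-the-envelope sketch (the layer sizes are made up for illustration) that counts the weights and biases of a small fully connected network:

```python
def count_parameters(layer_sizes):
    """Count weights + biases of a fully connected network.

    Each pair of adjacent layers (n_in, n_out) contributes
    n_in * n_out weights and n_out biases.
    """
    total = 0
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        total += n_in * n_out + n_out
    return total

# A tiny MLP for 28x28 images: 784 -> 512 -> 10
print(count_parameters([784, 512, 10]))  # 407050
```

Even this toy network has roughly 400k parameters; modern language models stack far wider layers hundreds of times over, which is where the billions come from.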

[ARCHIVED] Serving BERT Model with Pytorch using TorchServe

Finally

So finally PyTorch is getting decent (?) production serving capabilities. TorchServe was introduced a couple of days ago along with other interesting things.

I'm not in ANY way an expert on putting PyTorch into a production environment. What I've been using is Flask, and I have never tried ONNX or TorchScript, so I can't judge them.

So TorchServe was announced as an "industrial-grade path to deploying PyTorch models for inference at scale". In this tutorial we will try to load a fine-tuned BERT model.
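At a high level, the TorchServe workflow is: package the trained model into a `.mar` archive with `torch-model-archiver`, then point `torchserve` at a directory containing it. A rough sketch of the commands (the model file and handler names here are placeholders, not the exact files we will build):

```shell
# Package the fine-tuned model into a model archive (.mar)
# (bert_model.pt and bert_handler.py are placeholder names)
torch-model-archiver --model-name bert \
    --version 1.0 \
    --serialized-file bert_model.pt \
    --handler bert_handler.py \
    --export-path model_store

# Start TorchServe and register the archive
torchserve --start --model-store model_store --models bert=bert.mar

# Query the inference endpoint (default port 8080)
curl -X POST http://localhost:8080/predictions/bert -T sample_input.txt
```

The handler script is where the interesting work happens: it defines how raw requests are tokenized, fed to the model, and turned back into responses.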

[ARCHIVED] Learning High School Physics with Tensorflow

Introduction

Learning to convert from angular displacement to linear displacement was nothing fancy in high school, but TEACHING a machine learning model how to do the translation is on a whole new level. Just kidding, it's also nothing fancy.

Task

What we want to do is learn the mapping from the angular rotation (in complete circles) of a circular object, given its diameter, to the distance covered on a linear path.
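Before throwing TensorFlow at it, note that the ground-truth mapping is just arc length: one complete rotation of a circle of diameter $d$ covers $\pi d$ on a straight line, so $n$ rotations cover $n \pi d$. A plain-Python sketch of the function the model is supposed to learn:

```python
import math

def linear_displacement(rotations, diameter):
    """Distance covered on a straight path by a circle of the
    given diameter after `rotations` complete turns (arc length)."""
    return rotations * math.pi * diameter

# One full turn of a wheel with diameter 1 covers pi units
print(linear_displacement(1, 1))  # 3.141592653589793
```

This is the relationship the model has to recover from (rotation, diameter) → distance examples.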

[ARCHIVED] Using Tensorflow Gradient Descent to minimize functions

You have probably heard the hype around Gradient Descent and how it's used to optimize deep learning models. But you may never have known that it can be used to minimize ANY function you know.

In this blog post we're going to see how to use Gradient Descent to optimize a simple quadratic function we all know from high school:

$ax^2 + bx + c$

With

$a = 1$, $b = -40$, $c = 400$

we get the following formula:

$x^2 - 40x + 400$
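To make this concrete without pulling in TensorFlow yet, here is an equivalent plain-Python sketch of gradient descent on that quadratic (the learning rate and step count are arbitrary choices for illustration). The gradient of $x^2 - 40x + 400$ is $2x - 40$, so repeatedly stepping against it should drive $x$ toward the minimum at $x = 20$:

```python
def f(x):
    return x**2 - 40*x + 400   # a = 1, b = -40, c = 400

def grad_f(x):
    return 2*x - 40            # derivative of f

x = 0.0      # arbitrary starting point
lr = 0.1     # learning rate
for _ in range(200):
    x -= lr * grad_f(x)        # step against the gradient

print(round(x, 4))    # 20.0  (f is (x - 20)^2, minimized at x = 20)
print(round(f(x), 4)) # 0.0
```

TensorFlow's optimizers do exactly this loop for us, except the gradient is computed automatically instead of by hand.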