<![CDATA[Subek Sharma]]>https://blog.subeksharma.com.npRSS for NodeMon, 17 Jun 2024 00:15:24 GMT60<![CDATA[Predicting Pneumonia from Chest X-rays using Deep Learning, Computer Vision, and Flask]]>https://blog.subeksharma.com.np/predicting-pneumonia-from-chest-x-rays-using-deep-learning-computer-vision-and-flaskhttps://blog.subeksharma.com.np/predicting-pneumonia-from-chest-x-rays-using-deep-learning-computer-vision-and-flaskSat, 20 Jan 2024 07:44:28 GMT<![CDATA[<h2 id="heading-introduction"><strong>Introduction:</strong></h2><p>Pneumonia is a respiratory infection that inflames the air sacs in one or both lungs. It can be caused by various pathogens, including bacteria, viruses, and fungi, and is characterized by symptoms such as cough, fever, and difficulty breathing. Timely and accurate diagnosis is crucial for effective treatment, and advancements in deep learning and computer vision are now playing a pivotal role in automating this process.</p><h2 id="heading-understanding-pneumonia"><strong>Understanding Pneumonia:</strong></h2><p>Pneumonia affects millions worldwide, with a significant impact on public health. Identifying pneumonia in its early stages through medical imaging, such as chest X-rays, is critical for prompt intervention and improved patient outcomes.</p><h2 id="heading-deep-learning-and-computer-vision"><strong>Deep Learning and Computer Vision:</strong></h2><p>Deep learning, a subset of machine learning, involves training neural networks to learn patterns and features from data. In medical imaging, computer vision techniques combined with deep learning models enable the automated analysis of images, allowing for efficient disease detection.</p><h2 id="heading-transfer-learning-with-vgg16"><strong>Transfer Learning with VGG16:</strong></h2><p>Transfer learning leverages pre-trained models on large datasets for a new, related task. VGG16, a widely used convolutional neural network architecture, has proven effective in image classification tasks. 
By utilizing the pre-trained weights of VGG16, we can harness its knowledge of general image features for our pneumonia prediction model.</p><pre><code class="lang-python">from tensorflow.keras.applications import VGG16
from tensorflow.keras import models, layers

# Load the VGG16 base model pre-trained on ImageNet, without its classifier head
h, w = 224, 224  # input image height and width (the app resizes uploads to 224x224)
base_model = VGG16(weights='imagenet', include_top=False, input_shape=(h, w, 3))</code></pre><h2 id="heading-hyperparameter-tuning"><strong>Hyperparameter Tuning:</strong></h2><p>Fine-tuning the hyperparameters is essential for optimizing the performance of our model. Parameters such as learning rate, dropout rate, and the choice of activation functions can significantly impact the model's ability to generalize well.</p><pre><code class="lang-python"># Stack a small classification head on top of the VGG16 base
model = models.Sequential()
model.add(base_model)
model.add(layers.GlobalAveragePooling2D())
model.add(layers.Dense(256, activation='relu'))
model.add(layers.Dropout(0.5))
model.add(layers.Dense(1, activation='sigmoid'))

# Compile the model with Adam optimizer and binary crossentropy loss
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])</code></pre><h2 id="heading-training-and-saving-the-model"><strong>Training and Saving the Model:</strong></h2><p>Training the model involves exposing it to labelled data (chest X-ray images in our case) and adjusting the weights based on the model's predictions.
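</p><p>Before looking at the actual training call, it can help to see what "adjusting the weights" means at a toy scale. The following NumPy sketch is illustrative only and not part of the original project (all numbers are invented): it runs gradient-descent steps on a single sigmoid unit with the same binary cross-entropy loss the model above is compiled with.</p>

```python
import numpy as np

# Toy illustration (all numbers invented): one sigmoid unit trained with
# gradient descent on binary cross-entropy, the loss used by the model above.
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 4))           # 8 toy samples, 4 features each
y = (X[:, 0] > 0).astype(float)       # toy binary labels

w = np.zeros(4)                       # weights to be adjusted
b = 0.0
lr = 0.5                              # learning rate

def forward(X, w, b):
    return 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid activation

def bce(y, p):
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

loss_before = bce(y, forward(X, w, b))
for _ in range(100):
    p = forward(X, w, b)
    w -= lr * (X.T @ (p - y)) / len(y)   # gradient of BCE w.r.t. the weights
    b -= lr * np.mean(p - y)             # gradient w.r.t. the bias
loss_after = bce(y, forward(X, w, b))
```

<p>The same loop, at vastly larger scale and with the gradients computed automatically, is what Keras performs during training.</p><p>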
The trained model is then saved for future use.</p><pre><code class="lang-python"># Train the model with the prepared data generators
# (train_generator, validation_generator and callbacks are defined elsewhere)
history = model.fit(train_generator, epochs=50, validation_data=validation_generator, callbacks=callbacks)

# Save the trained model
model.save('pneumonia_prediction_model.h5')</code></pre><h2 id="heading-setting-up-flask-for-prediction"><strong>Setting Up Flask for Prediction:</strong></h2><p>Flask, a web framework for Python, provides an excellent platform to deploy our pneumonia prediction model. By creating an endpoint for predictions, we enable users to upload chest X-ray images and receive predictions in real-time.</p><pre><code class="lang-python">from flask import Flask, render_template, request, jsonify
import tensorflow as tf
import numpy as np
import cv2

app = Flask(__name__)
model = tf.keras.models.load_model('pneumonia_prediction_model.h5')

def preprocess_image(file):
    # Decode the uploaded bytes, convert BGR to RGB, resize and normalize
    img = cv2.imdecode(np.frombuffer(file.read(), np.uint8), cv2.IMREAD_COLOR)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    img = cv2.resize(img, (224, 224))
    img = img / 255.0
    img = np.expand_dims(img, axis=0)
    return img

def get_class_label(predictions):
    # The sigmoid output is a (1, 1) array; threshold it at 0.5
    class_index = 1 if predictions[0][0] > 0.5 else 0
    class_labels = ["Normal Chest", "Pneumonic Chest"]
    return class_labels[class_index]

@app.route('/')
def index():
    return render_template('index.html')

@app.route('/predict', methods=['POST'])
def predict():
    if 'file' not in request.files or request.files['file'].filename == '':
        return jsonify({'error': 'No file selected'})
    file = request.files['file']
    img_array = preprocess_image(file)
    predictions = model.predict(img_array)
    class_label = get_class_label(predictions)
    return jsonify({'prediction': class_label})

if __name__ == '__main__':
    app.run(debug=True)</code></pre><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1705737188504/d4679373-18ee-403d-aa5f-a0d026bd40b2.png" alt class="image--center mx-auto" /></p><p><img 
src="https://cdn.hashnode.com/res/hashnode/image/upload/v1705737174081/2d5b965b-c802-456b-b0d7-0b5d45f3941b.png" alt class="image--center mx-auto" /></p><h2 id="heading-conclusion"><strong>Conclusion:</strong></h2><p>In this article, we explored the application of deep learning and computer vision in predicting pneumonia from chest X-rays. Leveraging the power of transfer learning with the VGG16 architecture, we built a robust model capable of accurate predictions. The model was then trained, saved, and integrated into a Flask web application, providing a user-friendly interface for real-time pneumonia predictions. This combined approach showcases the potential of cutting-edge technologies in automating medical diagnostics for improved patient care.</p>]]>https://cdn.hashnode.com/res/hashnode/image/upload/v1705737154134/980a01e6-7d4a-4bbd-b303-5957aac5c8c2.png<![CDATA[About Me]]>https://blog.subeksharma.com.np/abouthttps://blog.subeksharma.com.np/aboutMon, 04 Dec 2023 10:25:34 GMT<![CDATA[<p><strong>Introduction</strong></p><p>I am an undergraduate student researcher pursuing a Computer Engineering degree at Paschimanchal Campus, Lamachaur, Pokhara, affiliated with Tribhuvan University. 
My academic interests span Deep Learning, Computer Vision, and their applications in interdisciplinary fields.</p><p><strong>Disciplines</strong></p><ul><li>Computer Engineering</li></ul><p><strong>Skills and expertise</strong></p><ul><li><p>Supervised machine learning</p></li><li><p>Artificial Neural Network</p></li><li><p>Convolutional Neural Network</p></li><li><p>Computer Vision</p></li></ul><p><strong>Languages</strong></p><ul><li><p>Nepali</p></li><li><p>English</p></li><li><p>Hindi</p></li></ul><p><strong>Contact information</strong></p><ul><li><p><a href="mailto:subeksharma11@gmail.com">subeksharma11@gmail.com</a></p></li><li><p><a target="_blank" href="https://twitter.com/tsubek">Twitter</a></p></li></ul>]]><![CDATA[Initialization Methods in Neural Networks: Exploring Zeros, Random, and He Initialization]]>https://blog.subeksharma.com.np/initialization-methods-in-neural-networks-exploring-zeros-random-and-he-initializationhttps://blog.subeksharma.com.np/initialization-methods-in-neural-networks-exploring-zeros-random-and-he-initializationWed, 21 Jun 2023 18:08:39 GMT<![CDATA[<h1 id="heading-introduction">Introduction:</h1><p>Neural networks have revolutionized the field of deep learning, enabling remarkable advancements in various domains. One crucial aspect that greatly influences the performance of a neural network is the initialization of its parameters, particularly the weights. In this article, we will explore three common initialization methods: Zeros Initialization, Random Initialization, and He Initialization. Understanding these techniques and their implications can help us design more effective neural networks.</p><h2 id="heading-zeros-initialization">Zeros Initialization:</h2><p>Zeros Initialization, as the name suggests, involves setting the initial weights of the neural network to zero.
While it may seem like a reasonable starting point, this approach is not recommended. Assigning all weights to zero means that all neurons in the network would have the same output during forward propagation, leading to symmetric behaviour and preventing the network from learning effectively. Consequently, the gradients during backpropagation would also be the same, hindering the learning process.</p><p><img src="https://lh5.googleusercontent.com/QVaReNO1rheywJe9B9j7CSSTM_ws9xghQJU_c8y6ylf2kgwLyvwMGJ8vlVEezjjkCOeiUBYqhDxjtWFBW-sWtPMYaLCE8RzF-D667P5ch7xloan-SCmCHlfW9S-J2FkxtsmTbfci8jNHXpbiuTcXQws" alt /></p><h2 id="heading-random-initialization">Random Initialization:</h2><p>Random Initialization is a widely used technique where the initial weights are set to random values. By introducing randomness, we break the symmetry and allow neurons to have different initial outputs. This enables the network to start learning diverse representations from the beginning. In practice, the random values are often drawn from a Gaussian distribution with zero mean and small variance. This ensures that the initial weights are close to zero and within a reasonable range, preventing them from becoming too large or too small.</p><p><img src="https://lh4.googleusercontent.com/XmIWaTz40iXu1UrEIY8YA80Fs1gfWQl5Mp56BRwHJTp18j65gCB2547Mn6qP_jUW40xSE4HYvstw-88edccHEvg7cy5tG-rJgMvxUwoU8PsM-YUDxlt6qcJjuxnGw8xCuH-YOW1PK_hPCSuxQLDVZ98" alt /></p><h2 id="heading-he-initialization">He Initialization:</h2><p>He Initialization is a popular initialization method proposed by He et al. in 2015. It is specifically designed for networks that use Rectified Linear Unit (ReLU) activation functions, which are widely used due to their ability to mitigate the vanishing gradient problem. He Initialization scales randomly drawn initial weights by a factor of sqrt(2/n_l), so that the weights have variance 2/n_l, where n_l represents the number of neurons in the previous layer.
This scaling factor takes into account the variance of the ReLU activation and helps to keep it consistent across layers, facilitating stable and efficient learning.</p><p><img src="https://lh4.googleusercontent.com/MobqJ0WyXY75F8kj087lzUlFRlzHEf7rhK-CpjhrGD6kW391hSo5VzcewagRQf2Gh60ZtcTjxxYX95C_1bju8fquuDzzDT8rkENFuLntdd0vbJO055jOvsoJBbx45BbgCmBJi-6VuJPx2Y6ikPkRFc4" alt /></p><h3 id="heading-choosing-the-right-initialization-method">Choosing the Right Initialization Method:</h3><p>To initialize parameters in a neural network using these three methods, you can specify the desired initialization technique in the input argument of your neural network framework or library. For example:</p><p>- For Zeros Initialization, set <code>initialization = "zeros"</code>.</p><p>- For Random Initialization, set <code>initialization = "random"</code>.</p><p>- For He Initialization, set <code>initialization = "he"</code>.</p><p>It is worth noting that modern deep learning frameworks often have default initialization methods, such as He Initialization, to simplify the initialization process. However, it is still important to be aware of the available options and their effects on the network's performance.</p><h3 id="heading-conclusion">Conclusion:</h3><p>Proper initialization of parameters is crucial for the success of neural networks. Zeros Initialization is not recommended due to its symmetry-inducing nature, while Random Initialization and He Initialization are widely used. Random Initialization introduces diversity and breaks the symmetry, allowing for effective learning. He Initialization, tailored for ReLU-based networks, helps maintain consistent variance across layers. 
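</p><p>As a concrete illustration, the three schemes can be sketched in a few lines of NumPy. This is a minimal sketch, not tied to any particular framework, and the layer sizes are arbitrary:</p>

```python
import numpy as np

# Illustrative sketch of the three schemes for one weight matrix
# connecting n_in neurons to n_out neurons (sizes are arbitrary).
rng = np.random.default_rng(42)
n_in, n_out = 512, 256

# Zeros initialization: every neuron starts identical (symmetric; avoid this)
W_zeros = np.zeros((n_in, n_out))

# Random initialization: small Gaussian values break the symmetry
W_random = rng.normal(loc=0.0, scale=0.01, size=(n_in, n_out))

# He initialization: scale by sqrt(2/n_in) so the weights have variance 2/n_in
W_he = rng.normal(loc=0.0, scale=np.sqrt(2.0 / n_in), size=(n_in, n_out))
```

<p>Note how the He-initialized weights are drawn with variance 2/n_in, exactly the scaling the He scheme prescribes for ReLU layers.</p><p>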
By understanding and utilizing these initialization methods appropriately, we can enhance the training process, enable faster convergence, and achieve better performance in our neural network models.</p>]]>https://cdn.hashnode.com/res/hashnode/image/upload/v1687374355480/73b50135-43dc-4a0d-9893-7458b11965f0.jpeg<![CDATA[Use of Locality Sensitive Hashing (LSH) in NLP]]>https://blog.subeksharma.com.np/use-of-locality-sensitive-hashing-in-nlphttps://blog.subeksharma.com.np/use-of-locality-sensitive-hashing-in-nlpWed, 31 May 2023 12:52:30 GMT<![CDATA[<h1 id="heading-what-is-hashing">What is Hashing?</h1><p>Hashing is a technique that maps a key to a value, such as a bucket index, using a hashing algorithm. Why is hashing useful in NLP? When you're attempting to translate one language to another, let's say Nepali to English, and you're using word embeddings for both languages, you need to organize similar words before training a model to identify them. This involves placing similar words from both languages into the same list, enabling the creation of a matrix that maps embeddings from one language to their corresponding words in the other language.</p><p>To achieve this, hashing is utilized to group similar words from both languages into the same bucket based on their word embeddings. However, using traditional hashing techniques does not guarantee that similar words will be assigned to the same bucket. To address this challenge, Locality Sensitive Hashing (LSH) can be employed.
LSH facilitates the determination of similarity between data points and ensures that they are mapped to the same or nearby buckets, thereby accomplishing the grouping of similar words effectively.</p><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685539390876/be011a5d-4310-4629-80f0-7a6359662f8c.png" alt class="image--center mx-auto" /></p><h2 id="heading-heres-a-step-by-step-explanation-of-the-method">Here's a step-by-step explanation of the method:</h2><ol><li><p><strong>Plane Definition</strong>: The first step is to define a plane in the space. A plane can be represented by a normal vector and a point on the plane, or by an equation that describes the plane. The normal vector indicates the direction perpendicular to the plane.</p></li><li><p><strong>Position Calculation</strong>: For each data point, the position with respect to the plane is determined. This can be done by calculating the signed distance between the data point and the plane. The sign of the distance indicates whether the point is on one side or the other side of the plane.</p><p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685537401255/568f7a43-ded1-4884-8b2d-8ce7f6cbe040.png" alt class="image--center mx-auto" /></p></li><li><p><strong>Hashing</strong>: Based on the calculated position, the data points are then hashed into buckets. Typically, a hash function is used to map the position to a hash code or bucket. Data points with similar positions or on the same side of the plane are more likely to be mapped to the same or nearby buckets.<br /> We can use multiple planes to get a single hash value. Let's take a look at the following example:</p><p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685537325490/a57d6d38-c14f-4ce7-8200-da1ccda33d85.png" alt class="image--center mx-auto" /></p></li><li><p>Let's consider an example to illustrate how this hashing function works. 
Suppose we have 3 planes, and a data point has the following positions relative to the planes: h0 = 1 (on the positive side of the first plane), h1 = 1 (on the positive side of the second plane), h2 = 0 (on the negative side of the third plane). Using the hashing function, the hash code for this data point would be:<br /> 2^0 * 1 + 2^1 * 1 + 2^2 * 0 = 1 + 2 + 0 = 3<br /> So, the data point would be hashed to bucket number 3.<br /> The advantage of this hashing function is that it generates distinct hash codes for different combinations of positions, allowing similar data points with the same or nearby positions to be mapped to the same or nearby buckets. By adjusting the weights (powers of 2), you can control the importance of each position value in the final hash code.</p></li><li><p><strong>Indexing and Retrieval</strong>: The hashed data points are stored in a data structure, such as a hash table or hash map, where each bucket contains the data points that hashed to the same code. During retrieval, given a query data point, its position with respect to the planes is calculated, and the search is performed within the bucket(s) corresponding to the hashed position.</p></li></ol><p>This method leverages the concept of separating data points based on their position relative to the plane. Points that lie on the same side of the plane are considered more similar to each other, and therefore, they are likely to be hashed into the same buckets.</p><p>It's important to note that the specific implementation and choice of hash functions may vary depending on the requirements and characteristics of the data.
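</p><p>The plane-based scheme described in the steps above can be sketched in a few lines of NumPy. The plane normals and query points below are invented for illustration, not taken from any real embedding space:</p>

```python
import numpy as np

# Illustrative multi-plane LSH sketch; the plane normals and the query
# points are made-up numbers, not from any real embedding space.
def hash_point(v, planes):
    """Combine the side of each plane into a single bucket number."""
    # Sign of the dot product with each plane's normal vector:
    # 1 if the point lies on the positive side, 0 otherwise.
    sides = (planes @ v >= 0).astype(int)
    # Weight the i-th side by 2**i, as in the worked example above.
    return int(sum(int(s) * 2**i for i, s in enumerate(sides)))

planes = np.array([[1.0, 0.0],   # normal vector of plane 0
                   [0.0, 1.0],   # normal vector of plane 1
                   [1.0, 1.0]])  # normal vector of plane 2

print(hash_point(np.array([2.0, 1.0]), planes))   # positive side of all planes -> bucket 7
print(hash_point(np.array([2.1, 0.9]), planes))   # a nearby point lands in bucket 7 too
```

<p>Points that fall on the same sides of all planes receive the same bucket number, which is what makes the scheme locality sensitive.</p><p>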
The design of the hash functions aims to maximize the probability of mapping similar data points to the same buckets while maintaining an acceptable level of collision (different data points being mapped to the same bucket).</p><p>This approach can be particularly useful for certain types of data and similarity search tasks where separating data points based on their position relative to a plane is meaningful for capturing similarity.</p>]]>https://cdn.hashnode.com/res/hashnode/image/upload/v1685539588122/1783130e-4a60-4176-ae88-33c007c96609.jpeg<![CDATA[Understanding Naive Bayes for Natural Language Processing (NLP)]]>https://blog.subeksharma.com.np/understanding-naive-bayes-for-natural-language-processing-nlphttps://blog.subeksharma.com.np/understanding-naive-bayes-for-natural-language-processing-nlpSun, 28 May 2023 17:17:55 GMT<![CDATA[<h2 id="heading-introduction">Introduction</h2><p>Machine learning algorithms play a crucial role in the field of natural language processing (NLP). Naive Bayes is one of the most popular and effective algorithms used in NLP. In this article, we will explore the fundamentals of Naive Bayes and its applications in various NLP tasks, highlighting its simplicity, efficiency, and effectiveness.</p><h2 id="heading-what-is-naive-bayes">What is Naive Bayes?</h2><p>Naive Bayes is a probabilistic machine learning algorithm based on Bayes' theorem. The term "naive" indicates the assumption of independence among the features. Despite this assumption, Naive Bayes has proven to be remarkably effective in many NLP tasks, such as sentiment analysis.</p><h2 id="heading-bayes-theorem">Bayes' Theorem:</h2><p>Before diving into Naive Bayes, let's first understand the underlying principle of Bayes' theorem.
It provides a way to update probabilities based on new evidence and is expressed as:</p><p>P(A|B) = P(B|A) * P(A) / P(B)</p><p>Where:</p><p>P(A|B): Probability of event A occurring given that event B has occurred.</p><p>P(B|A): Probability of event B occurring given that event A has occurred.</p><p>P(A): Prior probability of event A.</p><p>P(B): Prior probability of event B.</p><p><img src="https://miro.medium.com/v2/resize:fit:468/1*IGwM9cb8W-gyJW5rkiVQPw.jpeg" alt="Figure: Bayes' Theorem" class="image--center mx-auto" /></p><h2 id="heading-naive-bayes-for-nlp">Naive Bayes for NLP:</h2><p>Naive Bayes finds application in various NLP use cases, including sentiment analysis, spam detection, document classification, and more. It leverages Bayes' theorem to calculate the probability of a given text belonging to a particular class, such as positive or negative sentiment. It is based on the frequencies of words or features in the given text.</p><h2 id="heading-training-and-classification">Training and Classification:</h2><p>Like other machine learning algorithms, Naive Bayes undergoes a training phase. During this phase, it learns the statistical properties of the text data by calculating the probabilities of different words or features occurring in each class. This information is then used to make predictions on new and unseen data.</p><h2 id="heading-naive-bayes-assumption">Naive Bayes Assumption:</h2><p>The key assumption made by Naive Bayes is the independence assumption. It assumes that the presence or absence of a particular feature (e.g., a word) is independent of the presence or absence of other features. Although this assumption may not hold true for all NLP tasks, Naive Bayes can still yield good results in practice due to its simplicity and efficiency.</p><h2 id="heading-laplace-smoothing">Laplace Smoothing:</h2><p>Dealing with words that were not present in the training data is a common challenge in NLP.
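</p><p>One standard remedy, described next, is add-one (Laplace) smoothing of the word counts. A minimal Python sketch follows; the tiny "corpus" and vocabulary are invented purely for illustration:</p>

```python
from collections import Counter

# Minimal sketch (the tiny "positive" corpus and vocabulary are invented):
# Laplace smoothing gives unseen words a small nonzero probability.
positive_words = ["happy", "great", "great", "good"]   # words seen in class "positive"
vocabulary = {"happy", "great", "good", "terrible", "unseen"}

counts = Counter(positive_words)
V = len(vocabulary)          # vocabulary size
N = len(positive_words)      # total word count in the class

def p_word_given_positive(word, alpha=1):
    # Add alpha to every count and alpha*V to the total, so the
    # probabilities still sum to 1 over the vocabulary.
    return (counts[word] + alpha) / (N + alpha * V)

print(p_word_given_positive("great"))    # (2+1)/(4+5) = 1/3
print(p_word_given_positive("unseen"))   # (0+1)/(4+5) = 1/9, not zero
```

<p>Without smoothing, "unseen" would receive probability 0 and zero out the whole product of word probabilities for the class.</p><p>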
To address this issue, we can employ Laplace smoothing. It adds a small constant (typically 1, hence "add-one smoothing") to every word count and adjusts the denominator accordingly, ensuring that no probability estimate is zero. Thus, we avoid getting null probabilities for unseen words.</p><h2 id="heading-pros-and-cons-of-naive-bayes-in-nlp">Pros and Cons of Naive Bayes in NLP:</h2><h3 id="heading-pros">Pros:</h3><ul><li><p>Simple and efficient to implement.</p></li><li><p>Performs well with high-dimensional data, such as text.</p></li><li><p>Can be trained with little training data.</p></li></ul><h3 id="heading-cons">Cons:</h3><ul><li><p>Sensitive to irrelevant features.</p></li><li><p>Struggles with rare or unseen words, despite Laplace smoothing.</p></li><li><p>The independence assumption might not always hold for certain NLP tasks.</p></li></ul>]]>https://cdn.hashnode.com/res/hashnode/image/upload/v1685343116394/f9f69cc2-1b0d-4a03-9203-abacdea26aa0.jpeg