Wednesday, 16 September 2015

Bouncing Ball Program in JAVA using Multithreading and Applet

Bouncing Balls: execution of the Bouncing Ball program
          In the last post we learned what Multithreading in JAVA is and how to use it. The next ideal task is to learn how to use multithreading for Animation, i.e. how to use Multithreading in Applets. Today we will discuss a Bouncing Balls program in Java that uses Multithreading and an Applet. I am going to have a button that starts the Balls from one point in the Applet window. We need the Applet and AWT classes to perform this particular task.
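As a minimal sketch of the idea (class and field names here are illustrative, not the post's actual code): each ball keeps its own position and velocity, bounces off the walls, and runs on its own thread, with the applet repainting after every step.

```java
// Sketch of the per-ball bounce logic; each Ball would run on its own
// thread, and the applet's repaint() would be called after each step.
public class Ball implements Runnable {
    private int x = 0, y = 0;          // current position
    private int dx = 2, dy = 3;        // velocity per step
    private final int width, height;   // bounds of the drawing area

    public Ball(int width, int height) {
        this.width = width;
        this.height = height;
    }

    // One animation step: move, and reverse direction at the walls.
    public void step() {
        x += dx;
        y += dy;
        if (x < 0 || x > width)  { dx = -dx; x = Math.max(0, Math.min(x, width)); }
        if (y < 0 || y > height) { dy = -dy; y = Math.max(0, Math.min(y, height)); }
    }

    public int getX() { return x; }
    public int getY() { return y; }

    @Override
    public void run() {
        for (int i = 0; i < 100; i++) {
            step();
            try { Thread.sleep(10); } catch (InterruptedException e) { return; }
            // in the applet, repaint() would be called here
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Ball ball = new Ball(200, 150);
        Thread t = new Thread(ball);   // each ball animates on its own thread
        t.start();
        t.join();
        System.out.println("final position: " + ball.getX() + "," + ball.getY());
    }
}
```

Because each ball is an independent Runnable, adding more balls is just a matter of starting more threads.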

Monday, 7 September 2015

Multithreading in JAVA using Runnable Interface

                       After learning how to perform Multithreading in Java, the next step is to learn to do the same by using the Runnable Interface. As explained in the earlier post (Multithreading in Java), a class needs to extend the Thread class to achieve Multithreading. But when we want to do the same by using the Runnable Interface, we must understand that Runnable is an interface, not a class.
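Since Runnable is an interface, we implement its single run() method and hand the object to a Thread; the class itself stays free to extend something else. A minimal sketch (the class name is illustrative):

```java
// Runnable approach: implement run(), then construct a Thread from the
// Runnable object. Note the class does NOT extend Thread.
public class HelloRunnable implements Runnable {
    volatile boolean ran = false;   // set by the worker thread

    @Override
    public void run() {
        ran = true;
        System.out.println("run() on thread: " + Thread.currentThread().getName());
    }

    public static void main(String[] args) throws InterruptedException {
        HelloRunnable task = new HelloRunnable();
        // The Thread wraps the Runnable; start() invokes run() on a new thread.
        Thread t = new Thread(task, "worker-1");
        t.start();
        t.join();   // wait for the worker to finish
        System.out.println("task.ran = " + task.ran);
    }
}
```

The field is marked volatile so the write made by the worker thread is guaranteed to be visible to the main thread after join().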

Multithreading in JAVA

                     JAVA is a very powerful language which supports many features; Multi-threading & Graphics programming are only a few of them. Today, I am going to discuss a very simple program to explain both of these concepts. I will also share the code on this post as well as on my GitHub account. Let's start the discussion with Multi-threading. Multi-threading allows the programmer to use the computer's resources in a very efficient manner. A Multi-threaded program is one which contains 2 or more parts that can run concurrently. Each part of such a program is called a Thread, and each Thread defines a separate path of execution. Nowadays all modern Operating Systems support Multitasking. Multi-tasking is basically divided into two types: Process Based & Thread Based.
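A hedged sketch of the extend-Thread approach the post describes (names are illustrative): two threads are started and run concurrently, each one a separate path of execution.

```java
// Thread-based multitasking sketch: subclass Thread and override run().
public class MyThread extends Thread {
    private final String label;
    private int count = 0;

    public MyThread(String label) { this.label = label; }

    public int getCount() { return count; }

    @Override
    public void run() {
        // This loop is one independent path of execution.
        for (int i = 0; i < 5; i++) {
            count++;
            System.out.println(label + ": " + count);
            try { Thread.sleep(5); } catch (InterruptedException e) { return; }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        MyThread a = new MyThread("thread-A");
        MyThread b = new MyThread("thread-B");
        a.start();   // both parts now run concurrently
        b.start();
        a.join();    // wait for both to finish
        b.join();
    }
}
```

Running it interleaves the two labels in the output, since the scheduler switches between the threads.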

Saturday, 8 August 2015

How To Install CUDA on Fedora 20

Fedora 20 & CUDA

          Today, I am going to discuss the installation of the CUDA 6.5 Toolkit & Drivers on Fedora 20. Over the last few days a number of queries regarding this problem have been asked. And yes, installation of the CUDA Toolkit on the Fedora 20 (Heisenbug) release really is a tough and challenging task. A number of problems are encountered during the installation of the CUDA Toolkit.


Friday, 17 July 2015

Vector Arithmetic Operations in CUDA

After learning how to perform addition of two numbers, the next step is to learn how to perform addition and subtraction of two Vectors. The only differences between these two programs are the memory required to store the two Vectors and the kernel functions that perform the task.

Sunday, 12 July 2015

Addition of two numbers in CUDA: A Simple Approach


Addition is the most basic arithmetic operation, and performing it in the C language is a very easy and simple task. In this post, we will convert the C language code into CUDA code. The steps to remember for writing CUDA code for any program are as follows:

Sunday, 5 July 2015

Two Dimensional (2D) Image Convolution in CUDA by Shared & Constant Memory: An Optimized way

          After learning the concept of two-dimensional (2D) Convolution and its implementation in the C language, the next step is to learn to optimize it. As Convolution is one of the most compute-intensive tasks in Image Processing, it is always better to save the time it requires. So, today I am going to share a technique to optimize the Convolution process by using CUDA. Here we will use the Shared Memory and Constant Memory resources available in CUDA to get the fastest implementation of Convolution.

Tuesday, 23 June 2015

Two Dimensional (2D) Image Convolution: A Basic Approach

Image Convolution is a very basic operation in the field of Image Processing, and it is required in many algorithms. It is also a very compute-intensive task, as it involves operations on pixels.

Thursday, 28 May 2015

What is CUDA?

NVIDIA-CUDA

CUDA (Compute Unified Device Architecture) was invented by NVIDIA Corporation. The first release of CUDA came in mid-2007.
CUDA is a Parallel Computing Platform, one of the first of its kind, which enables General Purpose computing on GPUs (widely known as GPGPU) in a very efficient and easy way. CUDA enables the user to exploit the computing power of the Graphics Processing Unit (GPU) present in the underlying hardware. Before the inception of CUDA, one had to use DirectX or OpenGL

Wednesday, 27 May 2015

One Dimensional (1D) Image Convolution in CUDA by using TILES

          Tiled algorithms are a special case in CUDA, as we can optimize an algorithm's implementation by using this strategy. It is very useful when we want to achieve maximum usage of the GPU hardware available in the system. It has several advantages over naive CUDA implementations, such as improved memory bandwidth, reduced memory read/write operations, etc. A Tiled implementation uses the Shared Memory available in GPU hardware, which is much faster than the Global Memory of the GPU. In a naive CUDA implementation, only Global Memory is used for all read and write operations. So, if these memory (read/write) operations are huge in number, more time is wasted just transferring the data, which results in poor performance.