Carbon Nanotube Says Hello to Us

Researchers at the Massachusetts Institute of Technology (MIT) and Analog Devices have created the first programmable 16-bit carbon nanotube microprocessor. Its transistors are CMOS logic devices that use carbon nanotubes, instead of bulk silicon, as the channel. This processor, containing 15,000 transistors, could be a turning point in the race to go beyond conventional silicon computing.

Moving beyond silicon is not easy, since no other technology has come close to the reliability and scalability of the current manufacturing process. One can get an idea of just how good we are at making transistors from the fact that we can put billions of them on a single chip.

Although the present 15,000-transistor carbon nanotube chip won't be able to compete with its silicon counterparts, it was manufactured with technology that already works in the commercial world. That gives the computing field a realistic path towards a future beyond silicon.

Unsurprisingly, the chip was tested with a 'Hello, world!' program by Max Shulaker, an assistant professor at the institute. It will be interesting to see where nanotube-based CMOS technology takes us.

Another opportunity for the field of computing

Structure of a protein

Proteins are the workhorses of our body, working each and every second. They perform a wide range of jobs, including repairing DNA and synthesizing new cells. Proteins are chains of twenty different amino acids, and each protein's sequence is unique and can be entirely different from any other. The chemical forces at the molecular level in those amino acid chains give rise to a unique 3D structure, which plays an important part in each protein's function.

For instance, hemoglobin's shape allows it to bind oxygen while the protein is in the lungs. Once it reaches the tissues, it changes its shape to release that oxygen, and this happens thousands of times every day.

The shape of any protein depends on its amino acid sequence, and also on the temperature and pH of its surroundings. Researchers are especially interested in translating an amino acid sequence into its structure. However, traditional computing is often unable to provide the computational power needed to determine the shape of a protein from a given sequence of amino acids. Human beings are looking to make their own proteins in order to stimulate the immune system against a number of diseases and to deliver drugs to specific cells so that healthy cells are not affected, to mention just a few potential applications. In summary, we can say artificial protein synthesis could be the next big thing. But before that, the field of computing needs to provide biologists with tools to analyze the forces acting on each atom and precisely predict the structure. One can safely say that computing could open the floodgates of innovation in medical science, and biologists and computer researchers should work together to seize this opportunity.
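To get a feel for why brute force fails here, consider a toy model. The sketch below (plain Python, not any real folding algorithm) counts the possible conformations of a short chain on a 2D lattice; the count grows exponentially with chain length, which is exactly why predicting structure from sequence demands so much computation.

```python
# A toy illustration of why brute force fails: count the conformations of a
# short chain on a 2D lattice (modelled as self-avoiding walks). The count
# grows exponentially with length, so enumerating every possible shape of a
# realistic protein is out of reach, whatever the hardware.
def count_conformations(n_bonds):
    """Number of self-avoiding walks with n_bonds steps on the square lattice."""
    def walk(x, y, visited, steps_left):
        if steps_left == 0:
            return 1
        total = 0
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (x + dx, y + dy)
            if nxt not in visited:
                total += walk(nxt[0], nxt[1], visited | {nxt}, steps_left - 1)
        return total
    return walk(0, 0, {(0, 0)}, n_bonds)

for n in range(2, 13, 2):
    print(f"{n:2d} bonds: {count_conformations(n):>8,} conformations")
# Each extra bond multiplies the count by roughly 2.6, so a modest
# 100-residue chain already has on the order of 10^40 lattice conformations.
```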

Self-assembling chip

Moore's law (which is actually a prediction) has been a driving force in making computing more powerful and reliable with each passing year. We can now fit a number of transistors comparable to the global human population on a one-inch-by-one-inch processor. However, this miniaturisation is approaching the limits set by manufacturing technology and physics, which could act as a bottleneck for recent developments in artificial intelligence and big data.

The good news is that researchers have been searching for alternative ways to manufacture processors, and a few, like Karl Skjonnemand, have had some success. In particular, Karl's approach is inspired by nature, which has been synthesising complex molecules like DNA with high precision.

Click here to watch the TED talk 'The Self-Assembling Computer' by Karl Skjonnemand.

Artificial Intelligence: Not just another technological advancement


A few discoveries and inventions have left such a tremendous mark on human history that they changed the way we live and interact with the world. These include the invention of the wheel, the adoption of farming, learning to control fire and put it to use, especially for cooking; the printing press, the steam engine, and now artificial intelligence. In my opinion, we are lucky to be witnessing the inflection point at which such a technology is rising.

The fact that our ancestors started to eat cooked food allowed them to get more nutrients per meal than they could from raw food. In a way, we owe our brain, the most complex thing known in this universe, to the fact that our ancestors started eating cooked food, something that is just a normal thing today.

Artificial intelligence is one of the key technologies that will change the way we make decisions and access information. However, the sheer potential of AI is exactly why we should be aware of its power and of where to use it.

AI, or more precisely narrow AI, is good at going through a large amount of data at great speed and recognizing similar patterns in new data. This almost divine power is enough to tempt us to use AI in every application area where critical decisions need to be made. This is where we need to stop and take a look at the possible repercussions of using AI.

AI can be really useful where the solution to the problem is data-driven. For example, we can use data from self-driving cars that were involved in accidents, and the inferences drawn by AI could help make those cars safer. The volume of data generated could never be processed by humans, so the technology can be a real boon in this case.

Let's take another area where AI is expected to bring a revolution: medical science. One can train AI with thousands of medical images and teach it to spot anomalies so minute that human eyes often miss them. We can train artificial intelligence on sets of symptoms, and once trained it could write prescriptions or suggest tests. One could think that AI will bring down the healthcare costs incurred by governments while giving patients the best service ever. However, there is a caveat: medicine is not just a science inundated with fancy jargon. Doctors need to make terrified patients comfortable, which cannot be taught in school or programmed into machines (yet). Secondly, any mistake by AI could be disastrous for patients. There is a chance that AI could end up serving its developer's profit rather than serving patients. And it is often difficult to understand why an AI made a particular decision, because the process is too convoluted; no one wants a doctor whose decisions cannot be validated.
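For a concrete picture of the "train on thousands of images" step, here is a minimal sketch of a training loop, assuming PyTorch is available; the random tensors stand in for real labeled scans, and the tiny CNN is a placeholder rather than a clinically meaningful model.

```python
# A minimal sketch of training an image classifier to flag anomalies in
# scans, assuming PyTorch. Everything here is a placeholder: the random
# tensors stand in for real labeled medical images, and the tiny CNN is
# illustrative, not a clinically meaningful model.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Stand-in data: 256 fake 64x64 grayscale "scans", labeled normal (0) or anomalous (1).
images = torch.randn(256, 1, 64, 64)
labels = torch.randint(0, 2, (256,))
loader = DataLoader(TensorDataset(images, labels), batch_size=32, shuffle=True)

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 16 * 16, 2),   # two outputs: normal vs. anomaly
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(3):
    for batch_images, batch_labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(batch_images), batch_labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: last batch loss {loss.item():.3f}")
```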

To summarize, artificial intelligence is one of the most powerful tools ever developed in our history of existence. We need to be really careful about where and when to use it.


AI program predicts shapes of proteins

Human programmed cell death protein 4, molecular model. This protein is involved in apoptosis (programmed cell death).

Google's DeepMind project has added another feat to its name. Its AI program, AlphaFold, has been able to predict the 3D shapes of proteins, a fundamental unit of life.

Researchers have found it hard to understand protein folding, and it is not easy to simulate it even on supercomputers because of the high computational demands. Now AI has the potential to help researchers and scientists study proteins in detail and get a few steps closer to understanding, and finding cures for, diseases like diabetes, Parkinson's disease and Alzheimer's.

DeepMind trained its AI programme, a neural network, on thousands of proteins until it could predict 3D structures from amino acids. Afterwards, when the program was provided with a new sequence, it used the neural network to predict the distances between pairs of amino acids and the angles between the chemical bonds that connect them.
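To see why predicting pairwise distances gets you most of the way to a structure, here is a small sketch using classical multidimensional scaling with NumPy. This is not AlphaFold's actual method, just an illustration that a complete distance matrix pins down 3D coordinates up to rotation and reflection.

```python
# Toy illustration: if you know all pairwise distances between residues,
# classical multidimensional scaling (MDS) recovers 3D coordinates up to a
# rigid motion. AlphaFold predicts (distributions over) such distances;
# turning them into a structure is the same kind of geometric problem.
import numpy as np

rng = np.random.default_rng(0)
true_coords = rng.normal(size=(10, 3))                  # ten "residues" in 3D
diff = true_coords[:, None, :] - true_coords[None, :, :]
D = np.linalg.norm(diff, axis=-1)                       # true pairwise distances

# Classical MDS: double-centre the squared distances, then eigendecompose.
n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n                     # centring matrix
B = -0.5 * J @ (D ** 2) @ J                             # Gram matrix of centred coords
eigvals, eigvecs = np.linalg.eigh(B)                    # ascending eigenvalues
top = np.argsort(eigvals)[-3:]                          # keep the three largest
recovered = eigvecs[:, top] * np.sqrt(eigvals[top])     # coords, up to rigid motion

# Distances computed from the recovered coordinates match the originals.
diff2 = recovered[:, None, :] - recovered[None, :, :]
print(np.allclose(np.linalg.norm(diff2, axis=-1), D))   # True
```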

This new ability of AI will open doors to new research opportunities in medical science and help us understand a fundamental unit of life that has been an arcane subject for a long time.

AlphaFold: Using AI for scientific discovery


AI Accelerator


When we think about the processor, we think about the sophisticated circuitry inside our versatile computers that can do complex calculations, help us make logos, create documents and presentations, and play games, to name a few. Designers have thought of the processor as a versatile circuit and aimed to add more computing power to a single die. However, with new applications like machine learning and artificial intelligence, Continue reading →

Programming language for living cells developed by MIT


Researchers at MIT have developed a programming language exclusively for living cells to design biological circuits, similar to a hardware description language for computer hardware. One can now program cells to respond to particular environmental conditions.

The programmers need not know the details of genetic engineering or the delicate interactions between the cell body and its environment. It will allow researchers to develop Continue reading →
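The post does not show the language itself, but the underlying idea, describing the behaviour you want as logic over environmental inputs, the way a hardware description language describes a digital circuit, can be sketched in a few lines of Python. The inducers and the reporter gene below are chosen purely for illustration, not taken from the MIT language.

```python
# Toy model of the idea: describe the behaviour you want as boolean logic
# over environmental inputs, just as a hardware description language
# describes a digital circuit. The inducers (arabinose, IPTG) and the GFP
# reporter are illustrative choices, not details from the MIT language.
def genetic_and_gate(arabinose_present: bool, iptg_present: bool) -> bool:
    """Express the reporter gene (GFP) only when both inducers are present."""
    return arabinose_present and iptg_present

for ara in (False, True):
    for iptg in (False, True):
        state = "GFP on" if genetic_and_gate(ara, iptg) else "GFP off"
        print(f"arabinose={ara!s:<5}  IPTG={iptg!s:<5}  ->  {state}")
```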

Biological Computing


The search for new computing paradigms is encouraging researchers to explore uncharted territories. Researchers have been working on network-based computing that uses highly efficient molecular motors. The main advantages of this approach are its extremely low power consumption and its ability to attack problems with high parallelism.

One could argue that quantum computers make similar promises regarding performance and power consumption. However, we need to remember that quantum computers need temperatures near absolute zero, while bio-computing does not face such stringent demands on its operating environment.

In this bio-computer, proteins or other biomolecules solve the problem by moving through a nano-scale network of channels that is laid out according to the algorithm it is intended to implement.

"The biological computing units can multiply themselves to adapt to the difficulty of the mathematical problem," explains Till Korten, PhD, from TU Dresden, co-coordinator of the project. These biological computing units act as both processor and memory as they traverse the network.
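To make the idea concrete, here is a small Python sketch of how such a network computes. Each row of junctions corresponds to one number; an agent passing a junction either keeps its running total or adds that number, so the set of exits reached is the set of subset sums. The input values are arbitrary, and real devices do this with molecular motors in physical channels rather than in code.

```python
# A small simulation of how such a channel network computes. Each row of
# junctions corresponds to one number; an agent passing a junction either
# keeps its running total or adds that number. The set of exits reached by
# the agents is the set of achievable subset sums. The values are arbitrary.
def reachable_sums(values):
    """All subset sums of `values`, computed by flowing totals through rows of junctions."""
    totals = {0}                                  # agents enter with total 0
    for v in values:                              # one row of junctions per value
        totals = {t for s in totals for t in (s, s + v)}
    return totals

print(sorted(reachable_sums([2, 5, 9])))          # [0, 2, 5, 7, 9, 11, 14, 16]
```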

One can click here to read more about the development.


Multi-Chip Module GPU


GPUs have played, and will continue to play, an important role in the development of high-performance computing

No doubt, graphics processing units will be key to developing artificial intelligence and are needed wherever extensive computing resources are required. GPUs are becoming more powerful every year, and Moore's law can be credited for those improvements. However, with the law slowing down, the number of transistors that can fit in a given die size won't keep growing, leading to a halt in performance improvements. This could impede the progress of many promising technological advancements.

Researchers have proposed a Multi-Chip Module GPU (MCM-GPU) architecture based on aggregating GPU modules rather than using a single high-performance chip used at Continue reading →
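As a rough picture of what aggregation buys you (a sketch, not NVIDIA's actual design), the toy below splits one large matrix multiplication across four hypothetical modules with NumPy and stitches the partial results together:

```python
# Toy picture of the MCM idea: instead of one huge chip doing a big matrix
# multiply, split the work across four smaller "modules" and combine the
# results. The module count and sizes are arbitrary; a real MCM-GPU also has
# to manage inter-module bandwidth and locality, which this sketch ignores.
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(1024, 512))
B = rng.normal(size=(512, 256))

modules = 4
row_blocks = np.array_split(A, modules, axis=0)   # each module gets a slice of A
partials = [block @ B for block in row_blocks]    # modules compute independently
C = np.vstack(partials)                           # results are stitched together

print(np.allclose(C, A @ B))                      # True: same answer, 4-way split
```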

Computational dreaming

In any computing device, we provide the input, the device processes the data, and it gives back a result. It works as long as there is input, and stops the moment we stop providing it. Now consider a system that can keep working on a problem even when there is no input: it works on user input when input is available, and when the device is not being used, it works on optimising the solutions it provided earlier. This is similar to what humans do. We solve the problems at hand, and often we keep optimising them in our minds even when we are idle or asleep.

Researchers at Virginia Tech have developed a computation method, called computational dreaming, in which the device works on real-time inputs during a 'day phase' and tries to optimise its solutions during a 'dream phase', when it is no longer accepting any input.
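A minimal sketch of that day/dream loop, assuming a simple input queue; the solver and the refinement step are placeholders, not the researchers' actual method:

```python
# A minimal sketch of the day/dream loop: serve live inputs when they
# arrive, and spend idle cycles re-examining cached answers. The queue,
# the solver and the refinement step are placeholders for illustration.
import queue

inputs = queue.Queue()
solutions = {}

def solve(x):
    return x * x                        # stand-in for real problem solving

def refine(x, answer):
    return answer                       # stand-in: re-check or improve a cached answer

def run_one_cycle():
    try:
        x = inputs.get_nowait()         # day phase: a live input is waiting
        solutions[x] = solve(x)
    except queue.Empty:                 # dream phase: idle, so optimise old work
        for x, answer in solutions.items():
            solutions[x] = refine(x, answer)

for x in (3, 7):
    inputs.put(x)
for _ in range(4):                      # two day cycles, then two dream cycles
    run_one_cycle()
print(solutions)                        # {3: 9, 7: 49}
```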

Such a computing method can be used for AI applications which require massive Continue reading →