Olive Branch Technology
  • Services
    • Management Consulting >
      • Professional Development
    • Big Data/Analytics
    • Software Engineering
  • Dollar Dashboards
  • News and Information
  • Contact
  • Learning

The Compiler

Technical knowledge for non-technical professionals

Quantum Computers - What You Need to Know

6/22/2018

0 Comments

 
Quantum computers. I’m sure you’ve heard about them - about how they will revolutionize computing and about how blazing fast they will be. The term quantum computer has been tossed about enough that most people have some kind of vision of these lightning fast machines in their head. But do you really know what a quantum computer is or what its impact will be?

Qubit

A classic computer, like the one you are using right now, uses a bit as its basic building block. A bit can have two values, 0 or 1. Internally to a computer, it is a switch that is either flipped on or off to allow electrical current to flow (or not flow). Classic computers combine multiple bits to represent useful pieces of information.

Quantum computers are built on qubits which can have a value of 0 or 1 just like a classic computer. However, a qubit can also be both a 1 and a 0 at the same time ... crazy. Rather than switches, a qubit is stored in something much smaller, such as the spin of an electron. This works because at this very very small level, quantum physics takes over and, as strange as it may sound, something like an electron can exist in two different states at the same time.

Ummm… so? What's the big deal about a qubit?

So a quantum computer uses qubits … why does that make it so great? All too often, descriptions of quantum computers stop here and we are left with this tremendous leap from, “it uses qubits” to “it's super fast”. But that is quite a leap and is certainly not intuitive.

A qubit can be both a 0 and 1 at the same time, which means that two qubits can be 00, 01, 10, and 11 at the same time. Three qubits can be 000, 001, 010, 011, 100, 101, 110, and 111 at the same time, and so on. In a three-bit classical computer, you would need eight three-bit storage slots, or 24 bits, to hold this information, whereas in a quantum computer you just need your three qubits.

To make things even more interesting, because we can store all of these values in the same three qubits, we can operate on them all at the same time as well. In a classical computer, if we want to manipulate our three-bit words, we need to do it one word at a time. In our quantum computer, because all this information is held simultaneously within our three qubits, we can operate on all of it at the same time!

So, 3 qubits can hold 2^3 = 8 values at once, 4 qubits can hold 2^4 = 16, 5 qubits can hold 2^5 = 32, and each qubit we add doubles the count. If we keep going, it gets pretty scary. 50 qubits can hold 2^50 - more than a quadrillion - values simultaneously; writing all of those 50-bit values out on a classical machine would take roughly seven petabytes of storage.
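A quick back-of-the-envelope in Python shows how fast this doubling grows:

```python
# Number of values n qubits can hold in superposition: 2 to the power n.
for n in [3, 4, 5, 6, 50]:
    print(f"{n} qubits -> {2**n:,} simultaneous values")
```

By 50 qubits we're already past a quadrillion, and every additional qubit doubles it again.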

That isn't really true - it is a (qu)bit deceiving

When we put it this way, it may seem as though we are dealing with some magical, huge, powerful hard drive. That is not true. While 50 qubits have the potential of representing an astronomical amount of data simultaneously, you cannot save and retrieve that much data. Because I am not fond of writing out enormous numbers, let’s go back to our three qubit example.

Our 3 qubits have the potential of being any and all of 8 numbers at the same time. But when we try to read the value of our qubits, it isn't like reading bits off a hard drive. The data we read may not even be what we expected. If I try to store the value 101 in my three qubits and then try to read it back out, I may not get back what I entered. I may read 111 or 110, or any of the eight possible combinations. When we aren't looking, our qubits can certainly be in both states, but as soon as we look, each qubit immediately settles into one of its two readable states - a one or a zero.
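A toy Python sketch may help build intuition here. This is emphatically not real quantum mechanics - just a simulation of the idea that measurement returns one random outcome, assuming all eight states happen to be equally likely:

```python
import random

random.seed(42)  # seeded only so repeated runs look the same

# Toy model: 3 qubits in an equal superposition of all 8 values.
states = [f"{i:03b}" for i in range(8)]  # '000' ... '111'

def measure():
    # Looking at the qubits forces ONE outcome; the superposition is gone.
    return random.choice(states)

print(measure())  # one of the eight strings - no guarantee which
```

Run it a few times and you'll get different answers, which is exactly the frustration described above.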

At this point, any excitement you may have felt about the amazing powers of the qubit may be fading, replaced by a feeling that it is about as useless as it gets. Stay with me.


When our qubits are talking to each other - one tiny little quantum particle chatting with another tiny little quantum particle - they can exchange information without losing their superposition (being in both states at once). So we can construct quantum circuits and quantum logic gates so that this vast volume of information can be processed. We just can't observe what is happening inside. We have to be content with only reading the result of whatever calculation is happening.

Not a replacement for your computer

Because quantum computers operate in such a fundamentally different way, they really aren’t candidates for replacing the computer you are using right now. Everything you currently use your computer for - reading email, writing the next Pulitzer Prize-winning novel, doing your taxes, watching cat videos - won’t benefit from quantum computing. In fact, a quantum computer would be much slower.

There are, however, certain problems that classical computers are very bad at. For example: prime factorization. Now ... set your wayback machine to your early education. Remember having to find factors of numbers? Say, given the number 12, the factors are 2 & 6 and 3 & 4. Prime factors are factors of a number that happen to be prime (divisible only by 1 and themselves). For example, the prime factors of 15 are 3 & 5. For really big numbers, it is almost impossible to find the prime factors in any reasonable amount of time. For example, the prime factors of 1,050,809,297,549,059,047,257 are 32,416,187,567 & 32,416,190,071.

First, stop pretending that you knew that. Second, every known classical algorithm for finding the prime factors of large numbers takes a ridiculous amount of time to complete. For this reason, prime factorization is used heavily in cryptography - keeping your data safe as it moves across the internet.
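If you want to play with this yourself, here's a simple Python sketch using trial division - the most naive factoring method there is. It's fine for small numbers and hopeless for big ones, which is exactly the point:

```python
def prime_factors(n):
    """Trial division: try every divisor in turn. Simple, and painfully
    slow once the factors get large."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:          # whatever is left over is itself prime
        factors.append(n)
    return factors

print(prime_factors(15))  # [3, 5]
print(prime_factors(12))  # [2, 2, 3]
```

Try feeding it the 22-digit number above and you'll be waiting a very, very long time.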

A quantum computer, because it can represent so many numbers in such a small space and because it can operate on those numbers all at the same time, can solve these types of problems reasonably quickly.

As you might guess, our friends in the intelligence agencies are keenly interested in quantum computers. However, there are many similarly complex problems in fields such as chemistry and biology. So beyond just cracking codes, the potential for advancements in medicine and other sciences is tremendous.

Not ready for prime time

As you can imagine, the complexity involved in using something like the spin of an electron to store data makes building a reliable quantum computer rather challenging. Over the last few years, the giants in the computing industry such as Google and IBM have been making huge leaps forward but quantum computing is still in its infancy.

The quantum computers that exist now - and yes, they most definitely do exist - cannot outperform classic computers. When you run these really hard problems through our best and fastest classic computers, the quantum computers are still slower - they have not delivered on their promises ... yet. We don’t know when they will begin to perform - if you read the press releases, it may be in the next few months. We’ve heard that before, though.

A very different kind of machine

Quantum computers are generally imagined as replacements for our current computers. Our heads are filled with visions of ultra-fast computers on our desks, in our bags, and in our pockets.

But a quantum computer isn't a box sitting on a desk. The engineering involved in manipulating quantum states, keeping control of individual atoms, keeping superconductors cold, and working with lasers makes for an interesting-looking machine. This is not a MacBook. In fact, these computers look more like the spaceship from Close Encounters of the Third Kind than they do a classic computer.

The truth is, quantum computers, in theory, will be really good at a set of problems that classical computers are really bad at solving. Conversely, quantum computers will be really bad at doing those activities that our classical computers are good at.


These machines are complementary, so don’t go selling your Intel stock yet.

The future of computing?

In the 1950s, when a computer filled an entire room, it would have been hard to imagine that within 60 years computers thousands of times more powerful than those original behemoths would fit into our pockets. Quantum computers are barely even in their infancy, so it can be hard to predict where they will be in 50 years. Maybe in 50 years you will have a quantum computer in your pocket, or maybe embedded under your skin. Or not. A lot can change in 50 years.

At least now you have a better idea of what quantum computers are and what they might offer. It isn't magic. It's physics.... quantum physics.

Cheers,
Jim Conigliaro
Olive Branch Technology
http://www.olivebranchtechnology.com

$10 Word

Superposition

Superposition is a principle of quantum physics that allows any two quantum states to be combined to form a new state. The reverse is also true - any quantum state can be broken down into two other quantum states. This is what allows our qubits to be both a 1 and a 0 at the same time.

Another $10 Word

Entanglement

Quantum entanglement occurs when two particles become linked (entangled) such that their quantum states remain correlated - no matter how far apart they are. Quantum computers use this to link qubits together so they can interact during a computation. Einstein famously described this behavior as "spooky action at a distance."

Sooooo cool

Quantum computers are cold ... really cold. Like absolute zero cold. The particles we're dealing with are so small that vibrations & interference caused by heat - any heat at all - mess with the system.

Thanks Dr. Feynman

Though quantum computers are a new and yet-to-be-proven technology, they were originally proposed in 1959 by Dr. Richard Feynman in a lecture where he suggested using quantum particles for computing. 22 years later, in 1981, he published a paper in which he proposed the basic model for a quantum computer.

0 Comments

File Compression - more interesting than you think

5/17/2018

0 Comments

 
When you saw that today's topic was file compression, I'm sure you dropped everything, cleared your schedule, and got ready to sit back for an exciting read. Or ... more likely ... you’re about to hit the delete button. Hold on, don’t give up on me yet. Understanding file compression probably won’t help you all that much in your day-to-day life; it won’t help with an RFP or assessing a software solution; it won’t help you work with your IT teams. This falls squarely into the “gee whiz” category. If you stick with me, you might learn a little something that helps you speak geek better than most.

Some vocabulary
Before we dive into the guts of a file compression algorithm I want to cover a little basic computer science vocabulary.

Bit. A bit is the smallest unit of data a computer deals with. It can have two values: 0 or 1. Sometimes you will see them represented as false (0) or true (1).

Byte. A byte is a sequence of eight bits. That’s it, just a sequence of zeros and ones. For example: 10011011.

Character: A byte, or sequence of bytes, that represents some letter or symbol. For example, the letter A may be represented as 01000001 - a sequence of eight bits (a byte) that we all agree should represent the letter A.

Encoding: A system, or standard, for converting bytes into symbols or characters. Unless the whole world can agree upon how to convert a sequence of bits into a letter or symbol, we’ll have a heck of a time sharing information. So the world has agreed upon certain ways to encode bytes into symbols. One of the most basic encoding standards is the ASCII standard. 01000001 represents a capital A in ASCII encoding so if you hand any computer in the world the sequence 01000001 and tell it that it is an ASCII character, that computer will recognize it as A.
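You can see this agreement in action with a couple of lines of Python (a quick sketch using the built-in ord, chr, and format functions):

```python
# ord() gives a character's ASCII number; format() shows it as 8 bits.
print(ord("A"))                  # 65
print(format(ord("A"), "08b"))   # 01000001
print(chr(0b01000001))           # A - the round trip back to the letter
```

Every machine that speaks ASCII makes the same translation, which is the whole point of an encoding standard.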

Puzzles and Trees
If you look up file compression algorithms you will find an extraordinary amount of math. Understanding the math is necessary if you are going to build software to implement the algorithms. But if you just want to understand the basic concepts of how they work, you don’t need complicated math.

Time for an example. Let’s say we have a text file containing the words: 
add all apples.  If we encode this using the ASCII standard it becomes:

01100001 01100100 01100100 00100000 01100001 01101100 01101100 00100000 01100001 01110000 01110000 01101100 01100101 01110011

Including spaces, we’ve got 14 bytes of data. But with so few characters, we really don’t need a full byte to represent each character, we’re just doing that to comply with standards. Let's list each character that we're using and how many times it occurs in the string.


a - 3 occurrences
l - 3 occurrences
[space] - 2 occurrences
d - 2 occurrences
p - 2 occurrences
e - 1 occurrence
s - 1 occurrence

Now, what if we make up our own encoding mechanism that assigns fewer bits to characters that occur more often?

[space]  - 0
a - 1
d - 00
l - 01
p - 10
e - 11
s - 000

Here's our new file: 100000101010110100111000

We just reduced our file size to 24 bits - or three bytes.
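If you'd like to try this at home, here's a tiny Python sketch of the scheme above (the dictionary is just our made-up encoding, nothing standard):

```python
# The made-up (and, as we'll soon see, flawed) variable-length table.
table = {" ": "0", "a": "1", "d": "00", "l": "01",
         "p": "10", "e": "11", "s": "000"}

encoded = "".join(table[ch] for ch in "add all apples")
print(encoded)       # 100000101010110100111000
print(len(encoded))  # 24 bits
```

Twenty-four bits, exactly as promised. The catch comes next.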

Not quite that easy
Unfortunately, life isn’t always that simple. For a compression algorithm to work, you need to be able to reverse the process to get the original file back. In our example above, we can’t do that. For example, if we see the bits 11 in the compressed file, is that two a's or a single e? There is no way to tell.

Back in 1951, a student at MIT, David Huffman, developed an algorithm to handle this problem as part of a class project. Rather than just create a lookup table of shortened codes to translate bytes to characters, he encoded all the information into a graph, specifically a tree. I made an example of a tree below.
To determine the value for any given character, simply follow the path from the root at the top to the appropriate node at the bottom. For example, to get the letter "a" we simply follow the path 0-0; to get the letter "p" we follow the path 1-1-0-1. Our new character map looks like this:

a - 00
[space] - 01
d - 10
l - 1100
p - 1101
s - 1110 
e - 1111

Using this coding our file now looks like this:

001010010011001100010011011101110011111110

42 bits, or a little more than 5 bytes. Not as good as our first attempt, but this time it is reversible (we call this lossless compression). We just need to start at the left, read each bit, and follow the tree until we find a character.

The first bit is a 0, so we start at the top of the tree and move left, following the zero. The next bit is also a zero so we move left again and that brings us to an ‘a’ … first letter is a. The next bit is 1, starting back at the top of the tree we follow the 1 to the right. The next bit is 0, so we follow the 0 to the left and end up at ‘d’, so now we have “ad”.  The animation below shows how we can continue this process until we have recreated our original text: add all apples.
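Here's a little Python sketch of that character map in action (the encode and decode helpers are my own toy functions, not part of any library). Because no code in the map is a prefix of another, a left-to-right read can never be ambiguous:

```python
# The prefix-free code from the tree above.
code = {"a": "00", " ": "01", "d": "10",
        "l": "1100", "p": "1101", "s": "1110", "e": "1111"}

def encode(text):
    return "".join(code[ch] for ch in text)

def decode(bits):
    reverse = {v: k for k, v in code.items()}
    out, buffer = [], ""
    for bit in bits:
        buffer += bit
        if buffer in reverse:        # we've walked from the root to a leaf
            out.append(reverse[buffer])
            buffer = ""
    return "".join(out)

bits = encode("add all apples")
print(len(bits))     # 42
print(decode(bits))  # add all apples
```

The round trip works every time - that's the reversibility the first scheme was missing.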
It's everywhere!
Often we think of file compression as something we do to a file after we save it, such as when we zip a file or files to make them smaller to send over email. But compression algorithms are built into much of what we do now. Image formats (PNG, GIF, JPEG, etc.) have their own compression algorithms built into the format - that way the images take up less space on your drive and, more importantly, use less bandwidth when transmitted. Speaking of bandwidth ... ever stream a video? File compression is at the heart of all of your video streaming services, making it possible to efficiently transmit videos to your TV over the internet.
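You can see lossless compression at work with Python's built-in zlib module - a quick sketch (the sample text is arbitrary; repetitive data compresses especially well):

```python
import zlib

text = ("add all apples " * 100).encode()  # deliberately repetitive
packed = zlib.compress(text)

print(len(text), "->", len(packed), "bytes")
assert zlib.decompress(packed) == text  # lossless: we get back exactly what we put in
```

Fifteen hundred bytes shrink to a few dozen, and decompression restores every byte.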

Hug a computer scientist
You have them to thank for making it possible to get all that content to your computer, phone, and smart TV without breaking the bank. The algorithm we reviewed today, Huffman coding, is just one of many, many compression algorithms. Some are general purpose; others are designed specifically for certain types of data such as audio or video. Where they were once made for convenience - saving some space on your hard drive - they are now an integral part of how we move information around the internet.
Need help with a project? Need an extra pair of eyes on your RFP? Need an expert perspective on your technology strategy? Give me a call.

$10 Word

CODEC

Compression techniques involving an encoder to compress and a decoder to decompress. Used in image and video compression. The word is a blend of coder and decoder ... codec.

Lossy Compression

Some compression algorithms don't decompress a file with 100% accuracy. Information is lost. Before you write this off as a bad idea, consider this: JPEG, MP3, and WMV all leverage lossy compression - some of our most important media formats use compression that loses data by design.

Morse Code?

Morse code is an early form of data compression. More commonly used letters are assigned shorter codes than lesser-used characters. This makes transmitting Morse code more efficient.
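A few entries from the Morse table make the idea concrete (just a handful of real Morse codes in a Python dictionary):

```python
# Common letters get short codes; rare letters get long ones.
morse = {"e": ".", "t": "-", "a": ".-", "q": "--.-", "z": "--.."}

print(len(morse["e"]))  # 1 symbol for the most common English letter
print(len(morse["q"]))  # 4 symbols for a rare one
```

Same trick as Huffman coding, worked out by hand a century earlier.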

0 Comments

Demystifying Artificial Neural Networks

4/25/2018

0 Comments

 
An artificial neural network is a computer system that is modeled after biological nervous systems. As impressive as that may sound, an artificial neural network is, at best, a basic approximation of the biological equivalent. Think of it as being inspired by biological nervous systems rather than mimicking them ... much the same way an airplane was inspired by bird flight.

A little history ...
Artificial neural networks, and the larger field of artificial intelligence, may seem like recent technological accomplishments, but they were first proposed in the 1940s. Work in artificial neural networks has fallen in and out of favor over the last 80 years due to setbacks or advances in computing technology. As recently as the 1990s they were largely considered impractical for handling complex problems. It was only after recent breakthroughs in large scale computing and data storage that artificial neural networks proved practical. Today, highly complex artificial neural networks are at the heart of systems such as facial recognition, speech processing, and self-driving cars.

But what is an artificial neural network?
A big algorithm.

I'm sorry, were you expecting something more? Essentially, an artificial neural network is a highly complex math problem. It is made up of many, many much smaller - and much simpler - math problems that, when linked together, make a wonderfully complex piece of technology. Did you ever see one of those huge Lego sculptures? You know, those 15-foot-tall replicas of modern architecture built entirely from Legos? If you look close enough, you see that those sculptures are made up of many, many simple Lego blocks - not the fancy ones we see in our kids' sets - we're talking simple little bricks. Think of an artificial neural network like that.

We call those little building blocks that make up an artificial neural network neurons. You give a neuron some numbers, the neuron crunches the numbers and then spits out a result. If you hook up enough of these neurons together, you can do things such as identifying a person’s face in a photograph.


That's a big leap ... how is that possible?
Time for a simple example. Let's say I want a smart cooking thermometer that will tell me just when my steak is done how I like it. I’m a medium-rare kinda guy, so the light should probably go on at about 140 degrees. I made a little picture of how we can do this with one neuron. It takes the temperature as an input and turns on (fires) at 140 degrees.
That might be a little too simple and also leaves me at risk of my steak being overcooked.  If I’m not paying attention, I might miss when the light turns on, so my steak heats to 200 degrees and I feel like I’m eating a chunk of charcoal.

I’ll add another neuron, this one fires at 155 degrees. Then I’ll add a third neuron that adds the output of the first two and buy myself a fancy little light that is off when the output is zero, green when it is one, and red when it is two. Now I know when my steak is cooked just how I like it and when it is overdone. I made another little picture. Here you can see my three neurons linked together and a graph of the output.
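For the code-curious, here's the whole three-neuron gadget as a Python sketch (the thresholds are the ones from the example above; the function names are made up for illustration):

```python
# Two threshold "neurons" plus one that sums them: a toy, hand-built network.
def fires(temp, threshold):
    return 1 if temp >= threshold else 0

def steak_light(temp):
    n1 = fires(temp, 140)   # medium-rare reached
    n2 = fires(temp, 155)   # getting overdone
    return n1 + n2          # 0 = light off, 1 = green, 2 = red

print(steak_light(120))  # 0 - keep cooking
print(steak_light(145))  # 1 - perfect, pull it off
print(steak_light(200))  # 2 - charcoal
```

Each neuron does trivial arithmetic; the interesting behavior comes from wiring them together.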
Ummm... how is that artificial intelligence?
This is a freakishly easy problem that isn’t worthy of using any form of artificial intelligence. But what if there are other considerations? What if I prefer my steak a little more well done in winter, and more rare in summer? What if it changes depending on whether I am cooking on charcoal or gas? Other factors? How about humidity, how much sleep I’ve gotten, cut of meat, time of day, who I’m eating with, where I am eating, what I am having on the side, what I'm drinking with the steak, how comfortable my shoes are, and so on.

There may be hundreds of factors that determine the ideal internal temperature of my steak for any given meal. Once you build an artificial neural network big enough to handle the real complexity of the problem, it gets so complex that you could never design one by hand ... it must learn!


And so the machine learns...
That's right. Artificial neural networks can learn. We're anthropomorphizing a bit. What is really happening is an adjustment of the network to best fit a set of sample data. Typically, the machine learning process for an artificial neural network will involve collecting sample data - and a lot of it - to train the network. What we do is feed our sample data into the network, compare the output to reality, and then measure the error. There are algorithms we can use to adjust the guts of the artificial neural network such that the error will be slightly reduced next time we feed in that data. We do this over ... and over ... and over ... and over. Hundreds, maybe thousands, maybe hundreds of thousands of times.
This process leverages a branch of mathematics called optimization theory. I won't go into details of exactly how this works. That would involve calculus and I think we can all agree that now is not the time to delve into calculus 101.
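To make the loop concrete, here's a toy Python sketch of that feed-measure-adjust cycle. The "network" is a single made-up parameter, and the update rule is a bare-bones stand-in for the kind of adjustment real training algorithms make:

```python
# Toy "learning" loop: guess a parameter, measure the error, nudge, repeat.
# The model is just y = w * x, and the truth hidden in the data is w = 2.
samples = [(1, 2), (2, 4), (3, 6)]  # (input, expected output)

w = 0.0            # start with a bad guess
rate = 0.05        # how big a nudge to make each time

for _ in range(1000):                 # over ... and over ... and over
    for x, target in samples:
        error = w * x - target        # compare output to reality
        w -= rate * error * x         # adjust to shrink the error slightly

print(round(w, 3))  # lands very close to 2.0
```

A real network does this with millions of parameters instead of one, but the rhythm - predict, measure error, nudge, repeat - is the same.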

So... now what?
Artificial neural networks are highly versatile and effective for a wide variety of applications. If your organization would like to take advantage of the technology, you no longer need to employ a team of AI researchers to make that happen. Cloud-based artificial neural network technology is available from providers such as Amazon, Google, and IBM. There are lots of open source solutions available as well, but those require your own computing power - for smaller projects this is just fine, but for complex problems it can be a challenge. If your teams, colleagues, friends, or relatives need recommendations - give me a shout.

Well that sounds easy!
Yes and no. You no longer need to know the intricacies of an artificial neural network to leverage the technology. Until recently, you needed to know all the math, you needed to know the training algorithms (remember ... lots of calculus), and you needed to have the computing power. That is no longer the case. However, you still need to understand how to build the network (size, complexity, etc.) and you need the ability to acquire and build the necessary training data. This requires plenty of know-how. Still ... this is far more attainable than ever before!

Buyer beware
An artificial neural network isn't always necessary to solve a problem. In fact, if not applied correctly or to the right problem an artificial neural network may perform worse than a more traditional solution. With artificial intelligence being such a hot topic lately, and with the technology now available to anyone, it can be easy to throw a neural network into a system just for marketing purposes. Don't let that fool you. Ask questions, make sure that the folks using the technology know what they are doing and why.

Does this help?
I hope so. Any time there is a leap forward in technology, that leap is accompanied by a lot of buzz. Having a little more knowledge can help you ask the right questions, make the right decisions, and not be swayed by a good marketing campaign. If you need help getting started on your own efforts, or if you need help with your own purchasing decisions/RFPs, feel free to contact me. (Yes ... I realize that is my own marketing plug.)

Tidbits....

Machine Learning

The process used to create algorithms based on large sets of sample data presented repeatedly to an AI program 

Deep Learning

A type of machine learning that leverages enormous data sets (even by today's standards) and a complexity of computation that had previously been impractical to handle more abstract problems such as language translation and broad visual recognition. Because of the computational resources needed, heavyweights such as Google, Microsoft, and IBM dominate the field.

$10 dollar word

Heuristic

A way to score the correctness or strength of a solution so alternative solutions can be quickly compared and ranked. Often used to search for the best answer or solution or, in the case of games, to choose a move.

0 Comments

IP Addresses

2/14/2018

1 Comment

 
IP address ... this is another one of those technical terms that you will frequently hear and won't give much thought to. We all know what an IP address is ... right? Or should we say that we are familiar with the term IP address. More likely than not, most folks don't really know much about them.

The "IP" in IP address stands for Internet Protocol. Breaking down the term: an internet is an interconnected network of computing systems, and a protocol is a set of rules. Putting them together, the Internet Protocol is the set of rules governing how computing devices exchange information over an interconnected network - i.e., it defines how computers communicate over the internet. Thus, an IP address is simply the addressing scheme used by computers to communicate with one another over the internet.

We've all probably seen them, or at least would recognize the first version of them. You know ... 184.232.12.31 ... or something like that. Interestingly enough, there are now so many devices connected to the internet that we ran out of addresses. To solve that problem, the Internet Protocol was updated (to version 6), which supports a newer form of IP address. This one you may not be as familiar with; it looks something like this:

43a0:1fe4:9011:ffc0:1843:fa43:f0f0:0001 

This should last us a while. There is enough space in this addressing mechanism to give every grain of sand on the planet its own IP address ... and have plenty left over.
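Python's built-in ipaddress module understands both formats - a quick sketch using the example addresses from above:

```python
import ipaddress

v4 = ipaddress.ip_address("184.232.12.31")
v6 = ipaddress.ip_address("43a0:1fe4:9011:ffc0:1843:fa43:f0f0:0001")

print(v4.version, v6.version)  # 4 6
print(2 ** 32)    # total IPv4 addresses: 4,294,967,296
print(2 ** 128)   # total IPv6 addresses: astronomically more
```

Four billion IPv4 addresses sounded like plenty in the 1980s; 2^128 should genuinely last a while.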

The IP address is used to uniquely identify a computer on a network and allows messages from one computer to reach another computer. Internet routers are responsible for moving the messages around. (I've got a small explanation of how routers work, if you are interested.) An IP address is hierarchical in nature - providing identifying information about both the network a device belongs to as well as the specific device on the network.

This should be sufficient so that you can now say you know what an IP address is. If you really want to dig in and understand how they are structured and the history behind the creation of IP addresses, the good folks at ICANN have written a more thorough and detailed beginner's guide to IP addresses.


1 Comment

What is a router

2/1/2018

2 Comments

 
There are certain technologies, certain words, that we hear often enough that they achieve a certain familiarity in our minds without us really understanding what they are. A router is one such piece of technology. You hear the word in the context of your home internet connection ("Make sure the lights are green on your router"). You may have heard about problems with routers at your place of employment. Perhaps you saw a news article about routers being attacked and causing internet outages. It is time you knew what they do.

A router is simply a piece of hardware - a very specialized computer - that is responsible for connecting computer networks and passing messages along from one network to another. They route messages on the internet ... hence the name ... router.

The internet runs on the back of an addressing scheme called IP addresses. There are IPv4 addresses - four numbers separated by periods, e.g. 193.211.104.5. Then there are the newer IPv6 addresses - a series of 8 hexadecimal numbers separated by colons (e.g. f3aa:2010:aac9:53b0:01a9:aacb:0001:f00a). These addresses, much like our home addresses, have meaningful structure that allows a message to get to just the right machine.

When a message leaves your computer, for example, when you request a web page, your computer will send a message to the router on your network. This message has the IP address of the destination computer attached to it. Your router will take a look at that address and see if it knows which computer is associated with the address. If it does, it will send it directly to that computer. If it doesn't it will try to find another router that is closer to ... or at least knows more about ... the address your message is attempting to reach. That router will in turn do the same - if it knows the address it sends it to the destination, otherwise it will pass it along to another router - again - one that is closer to or knows more about the address than itself. Thus each time the message moves from router to router (called a hop) it gets closer to its destination until it finally reaches it.
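Here's a toy Python sketch of that hop-by-hop idea. The router names and the next_hop table are entirely made up - real routers build their forwarding tables with routing protocols - but the spirit of the logic is the same:

```python
# Toy network: each router either delivers locally or forwards the message
# to a "next hop" that knows more about the destination.
next_hop = {"home": "isp", "isp": "backbone", "backbone": "destination-net"}
local = {"destination-net": ["184.232.12.31"]}

def route(router, address, hops=0):
    if address in local.get(router, []):
        return hops                               # delivered!
    return route(next_hop[router], address, hops + 1)

print(route("home", "184.232.12.31"))  # 3 hops to delivery
```

Each router only needs to know its own little corner of the world plus where to send everything else - no single device holds a map of the whole internet.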

Let us say that you live on the fictional island of Malwaria. You need to send a letter - a request for some information - to someone in the United States at the address: 143 West Happy Street, Smartville, Indiana, United States. Being from this small island, you know little of the United States, but you do know where your local post office is. So you take your letter there. The postal carrier does not know much more about the United States, but she does at least know where the country is. So she puts the letter on a boat headed for the U.S. mainland. There it lands in the hands of someone who hasn't the foggiest idea of where Smartville is, but certainly knows where Indiana is ... and so your letter makes its way there. Once in Indiana, your letter sits in the hands of someone who knows little of the streets in Smartville but knows where the town is. So he brings the letter to Smartville, where he hands it off to someone at the local post office. There a mail carrier, very knowledgeable of the addresses in the town, takes the letter directly to 143 West Happy Street. The response will make it back to you in very much the same way.

This is very much how a router works ... but on a time scale of milliseconds. They are very simple, single purpose devices that when connected to one another allow your computer to communicate with other computers around the world - without really knowing where the computers are located.



2 Comments



Copyright © 2016, Olive Branch Technology