RISC or CISC, which approach works better for you?

RISC stands for Reduced Instruction Set Computing and CISC for Complex Instruction Set Computing.  Both are keystones of modern computing, but which is better?

This article introduces the area and gives non-technical people a basis for discussion.

  • A very simple instruction
  • Quick history and challenges
  • Modern applications

This article references the Minions characters from the Despicable Me (2010) movie franchise.  Sorry to disappoint, but this article isn’t about the movie at all.  You can find more about the movie here.

This article also uses the city analogy for computers from another article of mine, which can be read here.



RISC and CISC, A very simple instruction

Is it better to have one really amazing department in your government that can do everything, or a thousand departments that each do a much smaller set of jobs?

Most managers would say, “Can’t I have both?”  The short answer is you can, but you have to plan for it.

In computing terms, programmers can choose which to use, but the question is how they choose for your business.



Everything in a computer is a switch that is on or off.  A “binary digit”, or bit, is one switch that is on or off.  They may be tiny, tiny switches, but they’re there.

A transistor has three connections: two in and one out.  This is the basic building block of a microchip, or chip.

Using electricity and the power of magnetism, you can flip switches on and off very quickly and influence transistors to give different results.  To save time (and your brain from melting), turning groups of switches and transistors on and off can be grouped into an instruction.
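To make that concrete, here’s a tiny sketch in Python (just an illustration; the real switches live in hardware) showing how a group of on/off switches can be read as one number:

```python
# Six on/off switches, written out as 1s and 0s.
switches = "101010"       # each character is one switch: on (1) or off (0)

# Read the whole group of switches as a single number (base 2 = binary).
value = int(switches, 2)

print(value)  # 42 -- one pattern of switches, one meaning
```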


Instruction Sets, the IS of RISC and CISC

An instruction set is the list of instructions a chip can perform.

Visualise the remote control for your TV: there is a set number of things it can do.  It ain’t going to boil the kettle for you (though with some programming, it could), but it can make the volume go up and down easily.

Instruction sets boil down to a collection of one-word instructions such as add, divide and so on.

When manufacturers build chips, they decide how many instructions the chip can do.  The more instructions, the more it costs.


The biggest CISC chip in your computer is the CPU (Central Processing Unit).  In the city analogy of how your computer works, the CPU is the government building.

The more your government knows how to do, the better, right?  Just go to the CPU and it will know how to handle it.


Building blocks


RISC and CISC both come from the same humble beginnings.

As a programmer you haven’t the time to remember which switches to turn on and off, so the on/off combinations are grouped into words.  You can issue a one-word command and the chip performs that instruction, like pressing a button on the remote.

Computers do some things, like maths, very well.  Let’s take a quick look at how a computer does addition.

  • Start with loading value 1 from memory (Memory load)
  • Now load value 2 from memory (Memory load)
  • Add them together (Arithmetic operation)
  • Store that result back in memory (Memory store)

So doing one addition takes four instructions.  This level of programming is called assembly, and you can learn more here.
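The four steps above can be sketched in Python, using a dictionary as a stand-in for memory (a simplification; real assembly uses registers and numeric memory addresses):

```python
# A toy "memory": addresses 0 and 1 already hold our two values.
memory = {0: 5, 1: 7}

register_a = memory[0]            # 1. Memory load
register_b = memory[1]            # 2. Memory load
result = register_a + register_b  # 3. Arithmetic operation
memory[2] = result                # 4. Memory store

print(memory[2])  # 12
```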


Building complexity

Wouldn’t it be simpler for the programmer if it were one instruction, just “add”, like on your calculator?  Wouldn’t that save time?  That is what chip builders did, and it’s how we got calculators.

Imagine the basics of multiplication: 6 x 7 = 42.  You could look at this as 7 + 7 + 7 + 7 + 7 + 7.

So you could build a multiplication function using just the add function.  The programmers need only know the instruction “multiply” and not care how the system got there.
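As a sketch (in Python rather than real chip logic), a “multiply” built purely out of “add” might look like this:

```python
def multiply(a, b):
    """Multiplication built from nothing but repeated addition --
    the caller uses multiply() and never sees the adds inside."""
    total = 0
    for _ in range(b):
        total = total + a   # the only operation ever used is "add"
    return total

print(multiply(7, 6))  # 42 -- the same as 7 + 7 + 7 + 7 + 7 + 7
```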

New versions of computer chips had basic instructions like “add” but also introduced new instructions like “multiply”.

Over time the instructions became more and more complex, making the programmers’ lives easier.



Complex Instruction Set Computing is a philosophy, an approach.

Again using the city analogy, when you hit the power button on your computer or TV you effectively issue the command “start the city”.  You don’t care how it works, just that it works.

Complex instructions are the joining up of simple instructions so you can call the one complex instruction and it knows how to do all the simple ones.

If you want to read every instruction a processor can do, you can actually read the manuals.  Intel want programmers to know all the instructions available on their chips.

This approach makes programming much easier as you just leave the CPU to it.

There’s one problem: there’s only one CPU, one government building.  It can get busy, and while it is busy, you have to wait.

People and businesses don’t like waiting, especially for a very simple request the user knows should be fast.  Why should I wait for simple things?  All the while not knowing what else is going on in the CPU.  Who gets priority?



Reduced Instruction Set Computing adopts a different approach.

Instead of making complex one-stop-shop chips, what if you had much simpler chips but a lot more of them?  Each one doesn’t do as much, but there are more of them.

Now you have 100 departments to handle simple requests instead of having to wait for one department to become free.

The basic instructions are exactly the same as on the complex chip, but the very complicated instructions are left out, favouring the approach that simpler is better.

The Minions: maybe not as bright, but there are many of them.


Intel are the world leaders in CISC chips whilst AMD and NVIDIA are RISC leaders.

RISC and CISC, a quick history and their challenges

You’re only as strong as your weakest link

In building computers, as things get faster and more and more work is done, whichever component is slowest is the one that gets the attention.

With CISC builders adding more and more abilities, computers seem to require more and more resources whilst not adding much operational value for most users.

A mobile phone has more processing power than everything NASA used to put a man on the moon.  Do you need it?  Do you use it?


One of the most intensive uses of a computer’s resources is computer games.  3D (three-dimensional) graphics use a lot of maths.  Calculating the trajectory of laser beams and working out explosions, whilst drawing fire and water splashes, requires a lot of basic maths to draw the pictures in the game.  It’s drawing triangles (polygons), but there are a lot of them to draw!  For gaming the CPU becomes a very serious bottleneck.  It doesn’t need to be weighed down with all this simple maths…



So instead a compromise was found: let the video card, the Graphics Processing Unit (GPU), handle the simple maths en masse.

Your CPU is CISC and your GPU is RISC.

You need to make sure your computer programmers know this, because the default position will be to run it all through the CPU.

For this challenge RISC approaches are better for gaming than CISC approaches.  Companies like NVIDIA make RISC based video cards.

An Nvidia GeForce GTX 1080 Ti has 3584 cores, whereas an Intel i7 processor has 8 cores.


Who do you choose for your CPU?

As the power of the personal computer (PC) grew through the ’80s and into the new millennium, companies went head to head to win the market with two different strategies.

Advanced Micro Devices (AMD) and Intel, two American companies, fought to have their chips at the core of computers.

The war rages on with two different philosophies: Intel are predominantly CISC and AMD are more RISC.  Both work but have different approaches, and I am generalising for ease of discussion.


In the earlier days it was easier for programmers to use CISC chips.  They reduced software development time: the programmer could issue one command to the CISC chip and the job was done, where the RISC programmer had a lot more work ahead, and that work repeated.  This ease of use and improved accuracy meant that the concept of reusability became paramount.

The debate continues though Intel does have market dominance.

There are many more chip makers; some did better than others in this ongoing war, including Motorola, IBM, SPARC and Apple, to name but a few.



The laws of physics have a very real influence when you’re building very complicated microchips.

Moore’s Law is not really a law but an observation: roughly every two years, the number of transistors on a chip doubles.  This is just a trend rather than a law, and it’s slowing down now because of other very real physics laws.



To save space in the computer box, putting more than one processor into the same physical package saves a lot of room: stack the departments one on top of the other and build up instead of out.  April 2005 saw the release of the first Intel Pentium chip with two cores in the same package.

Electricity (electrons) going through a transistor generates heat, so chips generate heat.  The more transistors, the more heat they generate, and when you stack them on top of each other, getting the heat out is a challenge.

Heat is the biggest challenge as of 2018.  Silicon (as in sand) is the element on which microchips are built.  If it gets too hot, it melts.  Goodbye chip.  So the limits of physics are very real for chip builders.  Materials scientists are experimenting to find replacements that do better than silicon.

Modern applications

CUDA and GPGPU

CUDA (“Compute Unified Device Architecture”) is NVIDIA’s platform for GPGPU, “General-Purpose computing on Graphics Processing Units”.

What if a programmer could take work that up to now has gone to the CPU’s 8 cores and run it instead on the GPU’s 3584 RISC cores?  Admittedly the task would need to be based on simple maths, but surely it would be done faster.

In 2013 I attempted this work with a database command called “sort” for my Masters thesis.

Take a simple task for a computer like sorting.  Instead of forcing all the data through the CPU, let the GPGPU process it all.  It worked, and worked a lot faster: over 100 times faster than a MySQL database SQL “ORDER BY” sort.
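My thesis code ran on the GPU through CUDA, but the underlying divide-the-work idea can be sketched on the CPU in plain Python (the function name and chunking scheme here are mine, purely for illustration):

```python
import heapq

def many_workers_sort(data, workers=4):
    """Split the data into chunks, sort each chunk independently
    (as many simple cores could do in parallel), then merge the
    sorted chunks back into one sorted list."""
    size = max(1, len(data) // workers)
    chunks = [sorted(data[i:i + size]) for i in range(0, len(data), size)]
    return list(heapq.merge(*chunks))

print(many_workers_sort([9, 3, 7, 1, 8, 2, 6, 4]))  # [1, 2, 3, 4, 6, 7, 8, 9]
```

The per-chunk sorts are independent of each other, which is exactly the property that lets a GPU hand each chunk to a different core.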

So now the challenge is for programmers to make use of the incredible power in GPGPUs.  Yes, it’s more programming work (RISC is more work), but the speed gains can justify the effort, and if the solutions are designed with reusability in mind… wow!  Lots of projects are emerging in the area.


If there’s anything in this article you’d like to chat to me about you can contact me here or on social media.

