Computer Science Principles Lesson 3.17 - Algorithmic Efficiency

What Does “Algorithmic Efficiency” Mean?

  • Algorithmic efficiency describes how well an algorithm uses computing resources.
  • It measures two things: how fast an algorithm runs and how much memory it uses.
  • More efficient = runs faster and uses less memory.

Demonstration: How efficiently can you get from point A to B?

Click the buttons to turn them red, and try out the different paths you can take from left to right!

Demonstration 2: Same output, different time.

These two buttons reach the same endpoint, but require different numbers of steps to get there.

What’s So Important About Algorithmic Efficiency?

  • Processing power and time are limited.
  • Efficient algorithms finish quickly and use little RAM.
  • Inefficient algorithms can take a long time and use lots of resources.

How do We Measure Algorithmic Efficiency?

  • Algorithmic efficiency is about how much time and memory an algorithm uses.
  • Time complexity is measured using Big-O Notation.
  • Big-O Notation describes how the number of steps grows as the input size increases.
  • It helps us compare different algorithms and predict which will be faster for large inputs.
  • Big-O doesn’t give the exact number of steps, but shows how quickly work increases as problems get bigger.
  • Common Big-O examples: O(1) (constant), O(n) (linear), O(n^2) (quadratic), O(2^n) (exponential).
  • Memory usage is often related to time complexity and can also be described with Big-O.

What do these Big-O types mean?

  • O(1) Constant: The algorithm always takes the same number of steps, no matter how big the input is. Example: Checking if a number is even.
  • O(n) Linear: The number of steps grows directly with the input size. Example: Looking at every item in a list.
  • O(n²) Quadratic: The steps grow much faster—if you double the input, the work is four times as much. Example: Comparing every item in a list to every other item.
  • O(2ⁿ) Exponential: The steps double with every extra input—very quickly becomes huge! Example: Trying every possible combination in a puzzle.
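The first three classes above can be sketched as tiny Python functions (hypothetical examples written for this lesson, not taken from any library):

```python
# O(1) Constant: one step, no matter how big n is.
def is_even(n):
    return n % 2 == 0

# O(n) Linear: looks at every item in the list once.
def contains(items, target):
    for item in items:
        if item == target:
            return True
    return False

# O(n^2) Quadratic: compares every item to every other item.
def has_duplicate(items):
    for i in range(len(items)):
        for j in range(len(items)):
            if i != j and items[i] == items[j]:
                return True
    return False
```

If you double the length of `items`, `contains` does about twice the work, while `has_duplicate` does about four times the work.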

Example: Designing Algorithms and Measuring Their Efficiency Using Python

Let’s create an algorithm and see how algorithmically efficient it is using Big-O notation.

# Define some simple variables
index = 0
amount = 10

# The algorithm we'll be analyzing:
while index < amount:
    print("I love Python!")
    index += 1
  • The code repeats while ‘index’ is less than ‘amount’.
  • Each time, ‘index’ increases by 1.
  • The loop runs ‘amount’ times, so the computer does one extra step for each extra value of ‘amount’.
  • This is O(n) time complexity: n steps for n inputs.
  • ‘print()’ and ‘index += 1’ are O(1) (one step each).
  • Overall, this algorithm is O(n) time complexity.
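One way to check this O(n) claim is to count the loop's steps directly. This sketch replaces the `print()` call with a step counter (a small modification for measurement, not part of the original algorithm):

```python
# Count how many times the loop body runs for a given 'amount'.
def count_steps(amount):
    index = 0
    steps = 0
    while index < amount:
        steps += 1   # stands in for print("I love Python!")
        index += 1
    return steps

# Doubling the input doubles the steps: the hallmark of O(n).
print(count_steps(10))   # 10
print(count_steps(20))   # 20
```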

Example: Designing Algorithms and Measuring Their Efficiency Using JavaScript

Let’s create another algorithm and measure its efficiency with Big-O notation, this time using JavaScript.

%%js
// Define variables
const amount = 8;
let outerIndex = 0;
let innerIndex = 0;
let overallIndex = 0;

// The algorithm we'll be analyzing:
while (outerIndex < amount) { //Outer loop
    innerIndex = 0;
    while (innerIndex < amount) { //Inner loop
        overallIndex++;
        console.log(`Iteration #${overallIndex}`);
        innerIndex++;
    }
    outerIndex++;
}
  • The outer loop runs ‘amount’ times.
  • The inner loop also runs ‘amount’ times for each outer loop.
  • Altogether, the loop body runs ‘amount’ × ‘amount’ times.
  • Outer loop: O(n), inner loop: O(n).
  • Multiply them: O(n * n) = O(n^2).
  • So, this algorithm’s time complexity is O(n^2).
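The same nested-loop pattern can be re-sketched in Python with a step counter, to see the quadratic growth in actual numbers (this is a rewrite of the JavaScript above for measurement, not new behavior):

```python
# Count how many times the inner loop body runs for a given 'amount'.
def count_nested_steps(amount):
    overall = 0
    outer = 0
    while outer < amount:        # outer loop: runs 'amount' times
        inner = 0
        while inner < amount:    # inner loop: runs 'amount' times per outer pass
            overall += 1
            inner += 1
        outer += 1
    return overall

print(count_nested_steps(8))    # 64 = 8 * 8
print(count_nested_steps(16))   # 256: doubling the input quadruples the work
```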