Exploring Time Complexity of Dijkstra’s Algorithm in Python

This article is a complementary resource to the DSA with Python course.

Dijkstra's algorithm is a widely used algorithm for finding the shortest path between nodes in a weighted graph.

In this article, we will delve into the time complexity of Dijkstra's algorithm.


How Does Dijkstra's Algorithm Work?

Dijkstra's algorithm:

  1. Starts from a source node.
  2. Selects the node with the minimum distance that has not been visited yet.
  3. Updates the distances of its neighbors.
  4. Adds the neighbors to a priority queue.
  5. Repeats this process until all nodes have been visited, finding the shortest path to each node from the source node.
# Unoptimized algorithm

def dijkstra(graph, start, end):
    # Initialize every distance to infinity except the source
    shortest_distance = {vertex: float('infinity') for vertex in graph}
    shortest_distance[start] = 0

    # Predecessor of each vertex on the shortest path
    predecessor = {vertex: None for vertex in graph}

    # Unvisited vertices
    unvisited_vertices = list(graph)

    while unvisited_vertices:
        # Select the unvisited vertex with the smallest known distance
        current_vertex = min(unvisited_vertices, key=lambda vertex: shortest_distance[vertex])

        # Relax the edges to each neighbor
        for neighbor, weight in graph[current_vertex].items():
            if shortest_distance[current_vertex] + weight < shortest_distance[neighbor]:
                shortest_distance[neighbor] = shortest_distance[current_vertex] + weight
                predecessor[neighbor] = current_vertex

        unvisited_vertices.remove(current_vertex)

    # Reconstruct the path by backtracking from end to start
    path = []
    while end:
        path.append(end)
        end = predecessor[end]
    path.reverse()
    return shortest_distance, path

# Example graph as an adjacency dictionary: {vertex: {neighbor: weight}}
graph = {
    'A': {'B': 2, 'C': 4},
    'B': {'C': 1, 'D': 7},
    'C': {'E': 3},
    'D': {'F': 1},
    'E': {'D': 2, 'F': 5},
    'F': {}
}

# Function call
start, end = 'A', 'F'
distance, path = dijkstra(graph, start, end)
print(f"Shortest path from {start} to {end}: {path}")
print(f"Shortest distance from {start} to {end}: {distance[end]}")

Time Complexity (Unoptimized Solution)

The above implementation is not the optimal solution.

The time complexity of the above program can be divided into the following components:

1. Main Loop

The main loop iterates over all unvisited vertices, finding the vertex with the smallest distance. For V vertices, a single iteration takes O(V) time.

Since each of the V vertices needs to be visited in the worst case, the loop runs for V times.

Thus, by itself, the selection step takes O(V) * O(V) = <katex>O(V^{2})</katex> time.

Additionally, the algorithm updates the shortest distances of the current vertex's neighbors, which takes O(E) time in total across all iterations (where E is the number of edges). We get this complexity because each edge is relaxed at most once over the entire run.

Therefore, the total time complexity of the main loop is <katex>O(V^{2} + E)</katex>.
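To see the quadratic cost of the selection step concretely, the sketch below (illustrative only, not part of the algorithm) counts how many vertices the `min` scan inspects over a whole run:

```python
# Count how many vertices the min() scan inspects in total when the
# main loop settles V unvisited vertices, one per iteration.

def count_min_scans(V):
    unvisited = list(range(V))
    scans = 0
    while unvisited:
        scans += len(unvisited)  # min() inspects every unvisited vertex
        unvisited.pop()          # one vertex is settled per iteration
    return scans

# Total scans = V + (V-1) + ... + 1 = V(V+1)/2, which is Theta(V^2)
print(count_min_scans(100))   # 5050
print(count_min_scans(1000))  # 500500
```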

2. Path Reconstruction

The algorithm reconstructs the shortest path by tracing back the predecessors from the end vertex to the start vertex, which takes O(V) time for V vertices.
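As a small illustration, using a made-up predecessor dictionary such as a run of the algorithm might produce, backtracking follows at most V predecessor links:

```python
# Hypothetical predecessor map for a path A -> B -> C -> F
predecessor = {'A': None, 'B': 'A', 'C': 'B', 'F': 'C'}

end = 'F'
path = []
while end:                 # follows at most V links, so O(V)
    path.append(end)
    end = predecessor[end]
path.reverse()
print(path)  # ['A', 'B', 'C', 'F']
```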

Thus, the time complexity of the given Dijkstra's algorithm implementation is <katex>O(V^{2} + E)</katex> because the main loop dominates the total computing cost.

Note: In a simple graph, the number of edges E is at most <katex>V(V-1)</katex>, i.e., E = <katex>O(V^{2})</katex>. Since the <katex>V^{2}</katex> term already dominates, the E term is generally dropped and the complexity is often written as <katex>O(V^{2})</katex>.


Optimized Solution

Fortunately, our implementation can be optimized by using a priority queue to store the unvisited vertices.

import heapq

def dijkstra(graph, start, end):
    # Initialization
    shortest_distance = {vertex: float('infinity') for vertex in graph}
    shortest_distance[start] = 0

    predecessor = {vertex: None for vertex in graph}

    # Priority queue of (distance, vertex) pairs
    unvisited_vertices = [(0, start)]

    # Priority queue operations
    while unvisited_vertices:
        current_distance, current_vertex = heapq.heappop(unvisited_vertices)
        if current_vertex == end:
            break

        # Skip stale queue entries superseded by a shorter distance
        if current_distance > shortest_distance[current_vertex]:
            continue

        # Update the edges
        for neighbor, weight in graph[current_vertex].items():
            distance = current_distance + weight
            if distance < shortest_distance[neighbor]:
                shortest_distance[neighbor] = distance
                predecessor[neighbor] = current_vertex
                heapq.heappush(unvisited_vertices, (distance, neighbor))

    # Reconstruct the path by backtracking from end using the predecessor dictionary
    path = []
    while end:
        path.append(end)
        end = predecessor[end]
    path.reverse()
    return shortest_distance, path

Time Complexity (Optimized Solution)

The time complexity of the above program can be divided into the following components:

1. Initialization

The initialization of the shortest_distance and predecessor dictionaries takes O(V) time.

2. Priority Queue Operations

Each priority queue operation (insertion or deletion) takes O(log V) time. In the worst case, every vertex is popped from the queue once (up to V deletions) and every edge relaxation pushes a new entry (up to E insertions), so the total time complexity is O((V+E) log V).
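The per-operation O(log V) cost comes from Python's binary-heap `heapq` module, which the implementation above relies on. A quick sketch of the two operations used:

```python
import heapq

# A heap of (distance, vertex) pairs; heapq keeps the smallest first
pq = []
heapq.heappush(pq, (4, 'C'))   # O(log n) insertion
heapq.heappush(pq, (2, 'B'))
heapq.heappush(pq, (7, 'D'))

# heappop always returns the pair with the minimum distance, in O(log n)
print(heapq.heappop(pq))  # (2, 'B')
print(heapq.heappop(pq))  # (4, 'C')
```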

3. Edge Updates

The edge updates take O(E) time, where E is the number of edges. This is because we iterate over all edges in the graph and update the distances and predecessors of the vertices.

Overall Time Complexity

When you sum up all the components, you get:

Step                        Time Complexity
Initialization              O(V)
Priority Queue Operations   O((V+E) log V)
Edge Updates                O(E)

Thus, the total time complexity of Dijkstra's algorithm is:

O(V) + O((V+E) log V) + O(E)

Since O((V+E) log V) dominates O(V) and O(E) for large graphs (particularly when E is large), the overall time complexity is usually simplified to O((V+E) log V).

Overall Time Complexity: O((V+E) log V)
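To get a feel for how much the heap-based version saves, the sketch below compares the two growth rates numerically. The edge count E = 5V is a made-up value representing a sparse graph, where the heap-based version shines:

```python
import math

# Compare the two bounds for a graph with V vertices and E = 5V edges
for V in (100, 1_000, 10_000):
    E = 5 * V
    unoptimized = V ** 2                # O(V^2): linear-scan version
    optimized = (V + E) * math.log2(V)  # O((V+E) log V): heap version
    print(f"V={V}: V^2 = {unoptimized:,}, (V+E) log V = {optimized:,.0f}")
```

For V = 10,000 the quadratic bound is roughly two orders of magnitude larger, which is why the priority-queue implementation is preferred on sparse graphs.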