How to calculate Euclidean distance using NumPy?

I’m computing Euclidean distances between points stored in a NumPy array, but it’s not working the way I want.

import numpy as np
vec = np.random.rand(5, 2)        # 5 random points in 2D
distances = np.zeros((5, 5))      # pairwise distance matrix
for i in range(5):
    for j in range(i, 5):
        distances[i, j] = np.sqrt(((vec[i] - vec[j]) ** 2).sum())
        distances[j, i] = distances[i, j]   # the matrix is symmetric

print(distances)

The code creates a random array of shape (5, 2) holding five 2D points and a zeros array of shape (5, 5) to hold the pairwise distances. It then uses a nested for loop to calculate the distance between every pair of points, filling both triangles of the matrix since the distance is symmetric.

It gives the following output:

[[0.         0.31870735 0.25402376 0.29520792 0.21756367]
 [0.31870735 0.         0.34823778 0.57591941 0.28648432]
 [0.25402376 0.34823778 0.         0.52624433 0.43609023]
 [0.29520792 0.57591941 0.52624433 0.         0.32913868]
 [0.21756367 0.28648432 0.43609023 0.32913868 0.        ]]

All the diagonal elements are zero, which makes me think the code is wrong. Can you identify the logical error in it?
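
For reference, here is a vectorized sketch of what I believe the loop is meant to compute, using NumPy broadcasting and np.linalg.norm (this sketch is my assumption of an equivalent computation, not part of my original code):

import numpy as np

vec = np.random.rand(5, 2)

# Broadcast the difference between every pair of rows: shape (5, 5, 2),
# then take the norm along the last axis to get the (5, 5) distance matrix.
diff = vec[:, np.newaxis, :] - vec[np.newaxis, :, :]
distances = np.linalg.norm(diff, axis=-1)

print(distances)

As far as I can tell, this produces the same matrix as the loop version, including the zeros on the diagonal.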