How does obtaining more data impact the tradeoffs between bias and variance in machine learning models?

I am currently working on a machine learning task, and I'd like to understand how obtaining more data affects the tradeoffs involved in training models. For example, does more data always improve model performance, or are there tradeoffs such as increased computation time or a higher risk of overfitting? Can someone explain how additional data affects these tradeoffs and provide an example in Python?

Here is what I have coded so far: