How can I optimize my text analytics pipeline by automating the preprocessing step?

Are there techniques that can help me streamline this process? I am particularly interested in automated handling of tasks such as tokenization, stop-word removal, and stemming. Can someone provide a code snippet or example that demonstrates automated preprocessing in Python?
Here is what I have done so far:

This is the error that I am getting:

AttributeError: 'CountVectorizer' object has no attribute 'get_feature_names'