Python Scripts For Beginners: Web Development

A collection of 100 Python scripts for beginners related to web development, each designed to handle a fundamental task and serve as a useful example.


1. Simple HTTP Server

import http.server
import socketserver

PORT = 8000

Handler = http.server.SimpleHTTPRequestHandler

with socketserver.TCPServer(("", PORT), Handler) as httpd:
print("serving at port", PORT)
httpd.serve_forever()

Starts a simple HTTP server that serves files from the current directory. This is useful for quickly testing static HTML files or other resources on your local machine.


2. Fetch Web Page Content

import requests

response = requests.get('https://example.com')
print(response.text)

Uses the requests library to fetch the HTML content of a specified web page and prints it. This is useful for downloading and analyzing web page data.


3. Web Scraping with BeautifulSoup

from bs4 import BeautifulSoup
import requests

response = requests.get('https://example.com')
soup = BeautifulSoup(response.text, 'html.parser')
print(soup.title.text)

Fetches the HTML content of a web page and parses it using BeautifulSoup to extract the page title. This is useful for extracting specific information from web pages.


4. Extract Links from a Web Page

from bs4 import BeautifulSoup
import requests

response = requests.get('https://example.com')
soup = BeautifulSoup(response.text, 'html.parser')
for link in soup.find_all('a'):
    print(link.get('href'))

Extracts and prints all hyperlinks (<a> tags) from a web page. This script is helpful for gathering all the links available on a web page.


5. Submit Form Data

import requests

payload = {'username': 'user', 'password': 'pass'}
response = requests.post('https://example.com/login', data=payload)
print(response.text)

Sends a POST request with form data to a web server. This is used for submitting data through a form on a website.


6. Check if Web Page is Up

import requests

response = requests.get('https://example.com')
if response.status_code == 200:
    print("Web page is up")
else:
    print("Web page is down")

Checks the status of a web page by verifying if the HTTP status code is 200, indicating that the page is accessible.


7. Download File from URL

import requests

url = 'https://example.com/file.zip'
response = requests.get(url, stream=True)
with open('file.zip', 'wb') as f:
    for chunk in response.iter_content(chunk_size=8192):
        f.write(chunk)

Downloads a file from a given URL and saves it locally. Useful for handling file downloads in web scraping or automation tasks.


8. Upload File to Server

import requests

with open('file.txt', 'rb') as f:
    files = {'file': f}
    response = requests.post('https://example.com/upload', files=files)
print(response.text)

Uploads a file to a server using a POST request. Useful for automating file uploads to web services.


9. Parse JSON Response

import requests

response = requests.get('https://api.example.com/data')
data = response.json()
print(data)

Fetches JSON data from an API and parses it. Useful for working with REST APIs that return data in JSON format.


10. Simple Web Scraper with Selenium

from selenium import webdriver

driver = webdriver.Chrome()
driver.get('https://example.com')
print(driver.title)
driver.quit()

Uses Selenium to open a web page and print the title. Selenium is useful for automating web browsers and interacting with dynamic web pages.
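
If you'd rather not have a browser window pop up, Chrome can run headless; a sketch of the same script with that option (the --headless=new flag applies to recent Chrome versions, older ones use --headless):

from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_argument('--headless=new')  # Run Chrome without a visible window
driver = webdriver.Chrome(options=options)
driver.get('https://example.com')
print(driver.title)
driver.quit()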


11. Extract Text from HTML

from bs4 import BeautifulSoup
import requests

response = requests.get('https://example.com')
soup = BeautifulSoup(response.text, 'html.parser')
print(soup.get_text())

Extracts all the text content from an HTML page, stripping out the HTML tags.


12. Check Response Time

import requests
import time

start_time = time.time()
response = requests.get('https://example.com')
end_time = time.time()
print(f"Response time: {end_time - start_time} seconds")

Measures the time it takes to receive a response from a web server, useful for performance testing.


13. Handle HTTP Errors

import requests

try:
    response = requests.get('https://example.com')
    response.raise_for_status()
except requests.exceptions.HTTPError as err:
    print(f"HTTP error occurred: {err}")

Handles HTTP errors by catching exceptions and printing error messages.
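
raise_for_status only raises for HTTP error codes; connection failures and timeouts surface as other exceptions. A sketch that catches those too, with an assumed 5-second timeout:

import requests

try:
    response = requests.get('https://example.com', timeout=5)  # Assumed timeout
    response.raise_for_status()
except requests.exceptions.HTTPError as err:
    print(f"HTTP error occurred: {err}")
except requests.exceptions.RequestException as err:  # Connection errors, timeouts, etc.
    print(f"Request failed: {err}")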


14. Scrape Multiple Pages

from bs4 import BeautifulSoup
import requests

urls = ['https://example.com/page1', 'https://example.com/page2']
for url in urls:
    response = requests.get(url)
    soup = BeautifulSoup(response.text, 'html.parser')
    print(soup.title.text)

Scrapes multiple web pages by iterating through a list of URLs.


15. Monitor Web Page for Changes

import requests
import time

url = 'https://example.com'
previous_content = requests.get(url).text

while True:
    time.sleep(60)  # Check every minute
    current_content = requests.get(url).text
    if current_content != previous_content:
        print("Web page content has changed!")
        previous_content = current_content

Monitors a web page for changes by periodically checking its content.
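
Comparing whole page bodies keeps the full document in memory. A variant sketch that compares SHA-256 hashes instead:

import hashlib
import time

import requests

url = 'https://example.com'
previous_hash = hashlib.sha256(requests.get(url).content).hexdigest()

while True:
    time.sleep(60)  # Check every minute
    current_hash = hashlib.sha256(requests.get(url).content).hexdigest()
    if current_hash != previous_hash:
        print("Web page content has changed!")
        previous_hash = current_hash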


16. Extract Data from Tables

from bs4 import BeautifulSoup
import requests

response = requests.get('https://example.com/table')
soup = BeautifulSoup(response.text, 'html.parser')
table = soup.find('table')
for row in table.find_all('tr'):
    cells = row.find_all('td')
    print([cell.text for cell in cells])

Extracts data from HTML tables, useful for scraping tabular data from web pages.


17. Web Scraping with Requests-HTML

from requests_html import HTMLSession

session = HTMLSession()
response = session.get('https://example.com')
print(response.html.title.text)

Uses the requests-html library to fetch and parse HTML content, providing a simpler API for web scraping tasks.


18. Send GET Request with Headers

import requests

headers = {'User-Agent': 'Mozilla/5.0'}
response = requests.get('https://example.com', headers=headers)
print(response.text)

Sends a GET request with custom headers, useful for mimicking different user agents or adding other HTTP headers.


19. Automate Form Submission

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys

driver = webdriver.Chrome()
driver.get('https://example.com/login')
username = driver.find_element(By.NAME, 'username')
password = driver.find_element(By.NAME, 'password')
username.send_keys('user')
password.send_keys('pass')
password.send_keys(Keys.RETURN)
driver.quit()

Automates the process of filling out and submitting a web form using Selenium.


20. Extract Meta Tags

from bs4 import BeautifulSoup
import requests

response = requests.get('https://example.com')
soup = BeautifulSoup(response.text, 'html.parser')
for meta in soup.find_all('meta'):
    print(meta.get('name'), meta.get('content'))

Extracts and prints meta tags from a web page, including metadata such as description and keywords.


21. Scrape Images from a Web Page

from bs4 import BeautifulSoup
import requests
import os
from urllib.parse import urljoin

response = requests.get('https://example.com')
soup = BeautifulSoup(response.text, 'html.parser')
img_tags = soup.find_all('img')

if not os.path.exists('images'):
    os.makedirs('images')

for img in img_tags:
    img_url = img.get('src')
    if img_url:
        img_url = urljoin('https://example.com', img_url)  # Resolve relative src paths
        img_data = requests.get(img_url).content
        img_name = os.path.join('images', img_url.split('/')[-1])
        with open(img_name, 'wb') as f:
            f.write(img_data)

Scrapes and downloads all images from a web page, saving them to a local directory.


22. Generate Web Page with Flask

from flask import Flask

app = Flask(__name__)

@app.route('/')
def home():
    return "Hello, Flask!"

if __name__ == "__main__":
    app.run(debug=True)

Creates a basic web server using Flask that returns “Hello, Flask!” when accessed.


23. Create API with Flask

from flask import Flask, jsonify

app = Flask(__name__)

@app.route('/api/data', methods=['GET'])
def get_data():
    data = {'name': 'John', 'age': 30}
    return jsonify(data)

if __name__ == "__main__":
    app.run(debug=True)

Creates a simple API endpoint using Flask that returns JSON data.


24. Handle URL Parameters with Flask

from flask import Flask, request

app = Flask(__name__)

@app.route('/greet')
def greet():
    name = request.args.get('name', 'Guest')
    return f"Hello, {name}!"

if __name__ == "__main__":
    app.run(debug=True)

Handles URL parameters in a Flask route, providing a personalized greeting based on a query parameter.


25. Web Scraping with Scrapy

import scrapy

class QuotesSpider(scrapy.Spider):
    name = "quotes"
    start_urls = ['https://quotes.toscrape.com']

    def parse(self, response):
        for quote in response.css('div.quote'):
            yield {
                'text': quote.css('span.text::text').get(),
                'author': quote.css('span small::text').get(),
                'tags': quote.css('div.tags a.tag::text').getall(),
            }

Uses the Scrapy framework to scrape quotes from a web page, extracting text, authors, and tags.


26. Perform HTTP Basic Authentication

import requests
from requests.auth import HTTPBasicAuth

response = requests.get('https://example.com', auth=HTTPBasicAuth('user', 'pass'))
print(response.text)

Performs HTTP Basic Authentication when making a request, useful for accessing secured resources.


27. Handle Redirects

import requests

response = requests.get('https://example.com', allow_redirects=True)
print(response.url)

Follows redirects when making an HTTP request and prints the final URL.
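
The intermediate responses are kept in response.history, so you can inspect the redirect chain hop by hop:

import requests

response = requests.get('https://example.com', allow_redirects=True)
for hop in response.history:  # One entry per redirect that was followed
    print(hop.status_code, hop.url)
print(response.status_code, response.url)  # Final destination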


28. Parse URL Components

from urllib.parse import urlparse

url = 'https://example.com/path?query=param#fragment'
parsed_url = urlparse(url)
print(f"Scheme: {parsed_url.scheme}")
print(f"Netloc: {parsed_url.netloc}")
print(f"Path: {parsed_url.path}")
print(f"Query: {parsed_url.query}")
print(f"Fragment: {parsed_url.fragment}")

Parses and extracts components from a URL, such as scheme, network location, path, query, and fragment.


29. Check Content-Type Header

import requests

response = requests.get('https://example.com')
print(response.headers.get('Content-Type'))

Checks and prints the Content-Type header from an HTTP response to determine the type of content received.


30. Rate Limiting Requests

import requests
import time

def fetch(url):
    response = requests.get(url)
    print(response.text)
    time.sleep(2)  # Rate limit: 1 request per 2 seconds

urls = ['https://example.com/page1', 'https://example.com/page2']
for url in urls:
    fetch(url)

Implements basic rate limiting by adding a delay between requests to avoid overwhelming the server.


31. Web Scraping with Regex

import re
import requests

response = requests.get('https://example.com')
matches = re.findall(r'http[s]?://\S+', response.text)
print(matches)

Uses regular expressions to find and print all URLs in the HTML content of a web page.


32. Crawl Web Pages

import requests
from bs4 import BeautifulSoup

def crawl(url, depth):
    if depth == 0:
        return
    response = requests.get(url)
    soup = BeautifulSoup(response.text, 'html.parser')
    print(url)
    for link in soup.find_all('a'):
        next_url = link.get('href')
        if next_url and next_url.startswith('http'):
            crawl(next_url, depth - 1)

crawl('https://example.com', 2)

Crawls a web page and follows links up to a specified depth, useful for exploring related pages.
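
As written, the crawler can fetch the same URL repeatedly when pages link back to each other. A sketch that remembers visited URLs in a set:

import requests
from bs4 import BeautifulSoup

visited = set()

def crawl(url, depth):
    if depth == 0 or url in visited:
        return
    visited.add(url)
    response = requests.get(url)
    soup = BeautifulSoup(response.text, 'html.parser')
    print(url)
    for link in soup.find_all('a'):
        next_url = link.get('href')
        if next_url and next_url.startswith('http'):
            crawl(next_url, depth - 1)

crawl('https://example.com', 2)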


33. Fetch API Data with Authentication

import requests

token = 'your_token_here'
headers = {'Authorization': f'Bearer {token}'}
response = requests.get('https://api.example.com/data', headers=headers)
print(response.json())

Fetches data from an API using Bearer token authentication, commonly used with OAuth 2.0.


34. Web Scraping with Selenium for Dynamic Content

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get('https://example.com')
element = driver.find_element(By.ID, 'dynamic-element')
print(element.text)
driver.quit()

Uses Selenium to interact with and extract information from dynamically loaded content on a web page.


35. Basic Form Validation

import requests

data = {'email': 'user@example.com', 'password': 'password123'}
response = requests.post('https://example.com/form', data=data)
print(response.text)

Posts form data to a server endpoint. Any validation of the submitted values happens server-side; the script simply prints the server's response.


36. Parse HTML with lxml

from lxml import html
import requests

response = requests.get('https://example.com')
tree = html.fromstring(response.content)
title = tree.xpath('//title/text()')[0]
print(title)

Uses lxml to parse HTML and extract information, such as the page title, from a web page.


37. Download Image

import requests

url = 'https://example.com/image.jpg'
response = requests.get(url)
with open('image.jpg', 'wb') as f:
f.write(response.content)

Downloads and saves an image from a URL to the local file system.


38. Check Web Page Load Time

import requests
import time

start_time = time.time()
response = requests.get('https://example.com')
end_time = time.time()
load_time = end_time - start_time
print(f"Page load time: {load_time} seconds")

Measures the time it takes to load a web page.


39. Post JSON Data

import requests

data = {'key': 'value'}
response = requests.post('https://example.com/api', json=data)
print(response.json())

Sends JSON data to an API endpoint using a POST request.


40. Set Up Flask with SQLAlchemy

from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///example.db'
db = SQLAlchemy(app)

class User(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    username = db.Column(db.String(80), unique=True, nullable=False)

@app.route('/')
def home():
    return "Database setup complete!"

if __name__ == "__main__":
    app.run(debug=True)

Sets up a Flask application with SQLAlchemy for database interactions.


41. Use Flask Session

from flask import Flask, session, redirect, url_for

app = Flask(__name__)
app.secret_key = 'supersecretkey'

@app.route('/')
def index():
    if 'username' in session:
        return f'Logged in as {session["username"]}'
    return 'You are not logged in'

@app.route('/login/<username>')
def login(username):
    session['username'] = username
    return redirect(url_for('index'))

if __name__ == "__main__":
    app.run(debug=True)

Uses Flask sessions to manage user login state.


42. Simple Authentication with Flask

from flask import Flask, request, jsonify

app = Flask(__name__)

users = {'admin': 'password'}

@app.route('/login', methods=['POST'])
def login():
    data = request.json
    if data['username'] in users and users[data['username']] == data['password']:
        return jsonify({'message': 'Login successful'})
    return jsonify({'message': 'Invalid credentials'}), 401

if __name__ == "__main__":
    app.run(debug=True)

Implements a simple authentication mechanism in Flask.


43. Use Flask to Render Templates

from flask import Flask, render_template

app = Flask(__name__)

@app.route('/')
def home():
    return render_template('index.html')

if __name__ == "__main__":
    app.run(debug=True)

Uses Flask to render an HTML template. Requires a templates/index.html file.


44. Basic Authentication with Requests

import requests
from requests.auth import HTTPBasicAuth

response = requests.get('https://example.com', auth=HTTPBasicAuth('user', 'pass'))
print(response.text)

Performs HTTP Basic Authentication using the requests library.


45. Download Web Page as PDF

import pdfkit

pdfkit.from_url('https://example.com', 'output.pdf')

Converts a web page to a PDF file using pdfkit. Requires wkhtmltopdf.
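
If wkhtmltopdf is installed but not on your PATH, pdfkit can be pointed at the binary explicitly; the path below is an assumption, adjust it for your system:

import pdfkit

config = pdfkit.configuration(wkhtmltopdf='/usr/local/bin/wkhtmltopdf')  # Assumed install path
pdfkit.from_url('https://example.com', 'output.pdf', configuration=config)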


46. Schedule Web Scraping with Celery

from celery import Celery
import requests

app = Celery('tasks', broker='pyamqp://guest@localhost//')

@app.task
def scrape_website(url):
    response = requests.get(url)
    print(response.text)

Defines a Celery task that performs the scraping. Registering the task is only half the job; see the scheduling sketch below for running it at fixed intervals with Celery beat.
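
A minimal scheduling sketch, assuming the task lives in a module named tasks and an arbitrary 15-minute interval:

app.conf.beat_schedule = {
    'scrape-example-every-15-minutes': {
        'task': 'tasks.scrape_website',
        'schedule': 900.0,  # Seconds between runs (assumed interval)
        'args': ('https://example.com',),
    },
}

Run a worker with celery -A tasks worker and the scheduler with celery -A tasks beat; beat then dispatches the task on the configured interval.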


47. Handle Cookies with Requests

import requests

response = requests.get('https://example.com', cookies={'cookie_name': 'cookie_value'})
print(response.cookies)

Handles HTTP cookies when making a request.


48. Web Scraping with API Rate Limits

import requests
import time

url = 'https://api.example.com/data'
headers = {'Authorization': 'Bearer token'}

for _ in range(5):  # Make 5 requests
    response = requests.get(url, headers=headers)
    print(response.json())
    time.sleep(2)  # Respect rate limit

Handles API rate limits by spacing out requests with a delay, preventing overloading the server.


49. Monitor API Health

import requests

def check_health(api_url):
    response = requests.get(api_url + '/health')
    if response.status_code == 200:
        print("API is healthy")
    else:
        print("API is not healthy")

check_health('https://api.example.com')

Checks the health of an API by sending a request to a health endpoint and interpreting the response.


50. Send Email Using SMTP

import smtplib
from email.mime.text import MIMEText

msg = MIMEText('This is the body of the email.')
msg['Subject'] = 'Subject here'
msg['From'] = 'you@example.com'
msg['To'] = 'recipient@example.com'

with smtplib.SMTP('smtp.example.com', 587) as server:
    server.starttls()
    server.login('you@example.com', 'password')
    server.send_message(msg)

Sends an email through an SMTP server, including authentication and encryption.


51. Parse XML with lxml

from lxml import etree
import requests

response = requests.get('https://example.com/data.xml')
tree = etree.fromstring(response.content)
print(tree.find('.//title').text)

Uses lxml to parse XML data from a URL and extract specific elements.


52. Use Flask Blueprints

from flask import Flask, Blueprint

app = Flask(__name__)

home_bp = Blueprint('home', __name__)

@home_bp.route('/')
def home():
    return "Welcome to the Home page!"

app.register_blueprint(home_bp)

if __name__ == "__main__":
    app.run(debug=True)

Organizes Flask routes using Blueprints, which helps modularize code and manage complex applications.


53. Extract Data from JSON File

import json

with open('data.json') as f:
    data = json.load(f)
print(data)

Reads and parses data from a JSON file into a Python dictionary.


54. Upload File with Flask

from flask import Flask, request

app = Flask(__name__)

@app.route('/upload', methods=['POST'])
def upload_file():
    if 'file' not in request.files:
        return 'No file part'
    file = request.files['file']
    if file.filename == '':
        return 'No selected file'
    file.save(f"./uploads/{file.filename}")
    return 'File uploaded successfully!'

if __name__ == "__main__":
    app.run(debug=True)

Handles file uploads in a Flask application and saves the file to a specified directory.


55. Web Scraping with XPath

from lxml import html
import requests

response = requests.get('https://example.com')
tree = html.fromstring(response.content)
titles = tree.xpath('//title/text()')
print(titles)

Extracts data from a web page using XPath expressions, which provide a way to navigate XML/HTML documents.


56. JSON to CSV Conversion

import json
import csv

with open('data.json') as json_file, open('data.csv', 'w', newline='') as csv_file:
    data = json.load(json_file)
    writer = csv.writer(csv_file)
    writer.writerow(data[0].keys())  # Write header
    for row in data:
        writer.writerow(row.values())

Converts JSON data into CSV format, facilitating data manipulation and analysis in spreadsheet applications. Assumes the JSON file holds a list of objects that share the same keys.


57. Monitor Website Uptime

import requests
import time

def check_uptime(url):
    try:
        response = requests.get(url)
        if response.status_code == 200:
            print(f"{url} is up")
        else:
            print(f"{url} is down")
    except requests.RequestException as e:
        print(f"Error: {e}")

while True:
    check_uptime('https://example.com')
    time.sleep(60)  # Check every minute

Monitors the availability of a website by repeatedly checking its status and printing the result.


58. Fetch Web Page Metadata

from bs4 import BeautifulSoup
import requests

response = requests.get('https://example.com')
soup = BeautifulSoup(response.text, 'html.parser')
metadata = {
    'description': soup.find('meta', attrs={'name': 'description'}).get('content'),
    'keywords': soup.find('meta', attrs={'name': 'keywords'}).get('content')
}
print(metadata)

Extracts meta tags like description and keywords from a web page to understand its content. Note that soup.find returns None when a tag is absent, so this raises an AttributeError on pages missing either tag.


59. Web Scraping with PyQuery

from pyquery import PyQuery as pq
import requests

response = requests.get('https://example.com')
d = pq(response.text)
print(d('title').text())

Uses pyquery, a jQuery-like library for Python, to scrape and interact with HTML content.


60. Dynamic Content Interaction with Selenium

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys

driver = webdriver.Chrome()
driver.get('https://example.com')
search_box = driver.find_element(By.NAME, 'q')
search_box.send_keys('Python')
search_box.send_keys(Keys.RETURN)
print(driver.title)
driver.quit()

Uses Selenium to automate browser interactions, such as searching and retrieving page titles.


61. Web Scraping with Requests and Regex

import re
import requests

response = requests.get('https://example.com')
links = re.findall(r'href=["\'](http[s]?://.*?)(?=["\'])', response.text)
print(links)

Extracts URLs from the href attributes in an HTML document using regular expressions.


62. Paginate API Results

import requests

base_url = 'https://api.example.com/data'
page = 1
while True:
    response = requests.get(f'{base_url}?page={page}')
    data = response.json()
    if not data:
        break
    print(data)
    page += 1

Paginates through results from an API by requesting successive pages until no more data is returned.


63. Data Validation in Flask Forms

from flask import Flask, request

app = Flask(__name__)

@app.route('/form', methods=['GET', 'POST'])
def form():
    if request.method == 'POST':
        username = request.form['username']
        if not username:
            return 'Username is required!'
        return f'Hello, {username}!'
    return '''
        <form method="post">
            Username: <input type="text" name="username">
            <input type="submit" value="Submit">
        </form>
    '''

if __name__ == "__main__":
    app.run(debug=True)

Validates user input from a form and handles submission in a Flask application.


64. Redirect User in Flask

from flask import Flask, redirect, url_for

app = Flask(__name__)

@app.route('/')
def home():
    return redirect(url_for('about'))

@app.route('/about')
def about():
    return 'This is the About page!'

if __name__ == "__main__":
    app.run(debug=True)

Redirects users from one route to another in Flask using URL building.


65. Use Flask to Handle JSON Requests

from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/json', methods=['POST'])
def json_request():
    data = request.json
    return jsonify({'received': data})

if __name__ == "__main__":
    app.run(debug=True)

Handles and responds to JSON data in POST requests using Flask.


66. API Rate Limiting with Flask

from flask import Flask
import time

app = Flask(__name__)
rate_limit = 5 # requests per minute
last_request = 0

@app.route('/api')
def api():
    global last_request
    current_time = time.time()
    if current_time - last_request < 60 / rate_limit:
        return 'Rate limit exceeded', 429
    last_request = current_time
    return 'API response'

if __name__ == "__main__":
    app.run(debug=True)

Implements naive rate limiting in a Flask API to manage request frequency and prevent abuse. Since a single timestamp is shared, the limit applies globally across all clients; script 96 shows a per-IP variant.


67. Send HTTP PUT Request

import requests

data = {'key': 'value'}
response = requests.put('https://example.com/api', json=data)
print(response.text)

Sends an HTTP PUT request to update resources on a server.


68. Set Custom User-Agent

import requests

headers = {'User-Agent': 'CustomUserAgent/1.0'}
response = requests.get('https://example.com', headers=headers)
print(response.text)

Sets a custom User-Agent header in HTTP requests to mimic different browsers or applications.


69. Handle URL Encoding

from urllib.parse import urlencode

params = {'name': 'John Doe', 'age': 30}
encoded_params = urlencode(params)
print(encoded_params)

Encodes parameters into a URL query string for safe transmission over HTTP; the example above prints name=John+Doe&age=30.


70. Web Scraping with Requests and Pandas

import pandas as pd
import requests

response = requests.get('https://example.com/data')
df = pd.read_html(response.text)[0]
print(df.head())

Extracts tabular data from a web page into a Pandas DataFrame for data analysis.


71. Setup Flask with CORS

from flask import Flask
from flask_cors import CORS

app = Flask(__name__)
CORS(app)

@app.route('/')
def home():
    return "CORS enabled!"

if __name__ == "__main__":
    app.run(debug=True)

Enables Cross-Origin Resource Sharing (CORS) to allow your Flask application to handle requests from different origins.
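
CORS(app) opens the app to every origin. In practice you usually narrow it down; a sketch, with a hypothetical front-end origin:

from flask import Flask
from flask_cors import CORS

app = Flask(__name__)
# Allow only the assumed front-end origin, and only for /api/ routes
CORS(app, resources={r"/api/*": {"origins": "https://frontend.example.com"}})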


72. Basic Flask Unit Testing

import unittest
from flask import Flask

app = Flask(__name__)

@app.route('/')
def home():
    return 'Hello, World!'

class FlaskTestCase(unittest.TestCase):
    def setUp(self):
        self.app = app.test_client()
        self.app.testing = True

    def test_home(self):
        response = self.app.get('/')
        self.assertEqual(response.data, b'Hello, World!')

if __name__ == "__main__":
    unittest.main()

Sets up basic unit tests for a Flask application using the unittest framework.


73. Flask Blueprints with Static Files

from flask import Flask, Blueprint, send_from_directory

app = Flask(__name__)

blueprint = Blueprint('main', __name__, static_folder='static')

@blueprint.route('/static/<path:filename>')
def static_files(filename):
    return send_from_directory(blueprint.static_folder, filename)

app.register_blueprint(blueprint)

if __name__ == "__main__":
    app.run(debug=True)

Uses Flask Blueprints to serve static files from a specific directory.


74. Setup Flask with SQLAlchemy and Alembic

from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from flask_migrate import Migrate

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///example.db'
db = SQLAlchemy(app)
migrate = Migrate(app, db)

class User(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    username = db.Column(db.String(80), unique=True, nullable=False)

if __name__ == "__main__":
    app.run(debug=True)

Configures Flask with SQLAlchemy for ORM and Flask-Migrate (a wrapper around Alembic) for database migrations; migrations are then created and applied with the flask db init, flask db migrate, and flask db upgrade commands.


75. Flask Middleware Example

from flask import Flask, request

app = Flask(__name__)

@app.before_request
def log_request_info():
    print(f"Request Headers: {request.headers}")

@app.route('/')
def home():
    return 'Hello, World!'

if __name__ == "__main__":
    app.run(debug=True)

Uses Flask middleware to log request headers before processing each request.


76. Flask with Redis

from flask import Flask
import redis

app = Flask(__name__)
r = redis.Redis(host='localhost', port=6379)

@app.route('/set/<key>/<value>')
def set_key(key, value):
    r.set(key, value)
    return f"Key {key} set to {value}"

@app.route('/get/<key>')
def get_key(key):
    value = r.get(key)
    return f"Key {key} has value {value.decode()}" if value else 'Key not found'

if __name__ == "__main__":
    app.run(debug=True)

Integrates Redis with Flask to store and retrieve key-value pairs.


77. Flask with JWT Authentication

from flask import Flask, request, jsonify
import jwt
import datetime

app = Flask(__name__)
app.config['SECRET_KEY'] = 'your_secret_key'

@app.route('/login', methods=['POST'])
def login():
    data = request.json
    payload = {'user': data['username'], 'exp': datetime.datetime.utcnow() + datetime.timedelta(hours=1)}
    token = jwt.encode(payload, app.config['SECRET_KEY'], algorithm='HS256')
    return jsonify({'token': token})

@app.route('/protected')
def protected():
    # Strip the "Bearer " prefix commonly sent in the Authorization header
    token = request.headers.get('Authorization', '').replace('Bearer ', '')
    try:
        jwt.decode(token, app.config['SECRET_KEY'], algorithms=['HS256'])
        return 'Protected content'
    except jwt.ExpiredSignatureError:
        return 'Token expired', 401
    except jwt.InvalidTokenError:
        return 'Invalid token', 401

if __name__ == "__main__":
    app.run(debug=True)

Uses JWT for user authentication and protected routes in a Flask application.


78. Flask Session with Redis

from flask import Flask, session, redirect, url_for
from flask_session import Session
import redis

app = Flask(__name__)
app.config['SESSION_TYPE'] = 'redis'
app.config['SESSION_PERMANENT'] = False
app.config['SESSION_USE_SIGNER'] = True
app.config['SESSION_KEY_PREFIX'] = 'session:'
app.config['SESSION_REDIS'] = redis.StrictRedis(host='localhost', port=6379)
Session(app)

@app.route('/')
def index():
    if 'username' in session:
        return f'Logged in as {session["username"]}'
    return 'You are not logged in'

@app.route('/login/<username>')
def login(username):
    session['username'] = username
    return redirect(url_for('index'))

if __name__ == "__main__":
    app.run(debug=True)

Uses Redis to manage Flask sessions, enabling persistent user state across requests.


79. Web Scraping with BeautifulSoup and Requests

from bs4 import BeautifulSoup
import requests

response = requests.get('https://example.com')
soup = BeautifulSoup(response.text, 'html.parser')
titles = [title.get_text() for title in soup.find_all('title')]
print(titles)

Scrapes and extracts titles from a web page using BeautifulSoup and requests.


80. Use Flask with WebSockets

from flask import Flask, render_template
from flask_socketio import SocketIO

app = Flask(__name__)
app.config['SECRET_KEY'] = 'secret!'
socketio = SocketIO(app)

@app.route('/')
def index():
    return render_template('index.html')

@socketio.on('message')
def handle_message(msg):
    print('Message: ' + msg)
    socketio.send(msg)

if __name__ == "__main__":
    socketio.run(app)

Sets up WebSocket communication with Flask using Flask-SocketIO for real-time messaging. Requires a templates/index.html that loads the Socket.IO client library.


81. Generate PDF with ReportLab

from reportlab.lib.pagesizes import letter
from reportlab.pdfgen import canvas

c = canvas.Canvas("document.pdf", pagesize=letter)
c.drawString(100, 750, "Hello, World!")
c.save()

Creates a PDF document with custom text using the ReportLab library.


82. Scrape Data with BeautifulSoup and Requests

from bs4 import BeautifulSoup
import requests

response = requests.get('https://example.com')
soup = BeautifulSoup(response.text, 'html.parser')
data = {
    'title': soup.title.string,
    'headings': [h1.get_text() for h1 in soup.find_all('h1')]
}
print(data)

Extracts page title and headings from a web page using BeautifulSoup.

83. HTML Form Submission with Requests

import requests

url = 'https://example.com/form'
data = {'name': 'John', 'email': 'john@example.com'}
response = requests.post(url, data=data)
print(response.text)

Submits form data to a URL using a POST request with the requests library.


84. Use Requests to Download Files

import requests

url = 'https://example.com/file.zip'
response = requests.get(url, stream=True)
with open('file.zip', 'wb') as file:
    for chunk in response.iter_content(chunk_size=8192):
        file.write(chunk)

Downloads a file from the web and saves it locally, handling large files in chunks.


85. Simple REST API with Flask

from flask import Flask, jsonify

app = Flask(__name__)

@app.route('/api')
def api():
    return jsonify({'message': 'Hello, World!'})

if __name__ == "__main__":
    app.run(debug=True)

Creates a simple REST API endpoint with Flask that returns a JSON response.


86. Flask API with Query Parameters

from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/api/greet')
def greet():
    name = request.args.get('name', 'Guest')
    return jsonify({'message': f'Hello, {name}!'})

if __name__ == "__main__":
    app.run(debug=True)

Creates an API endpoint that accepts query parameters and returns a personalized greeting.


87. Basic API Authentication with Flask

from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/secure-data')
def secure_data():
    auth = request.headers.get('Authorization')
    if auth == 'Bearer my_secret_token':
        return jsonify({'data': 'Secure data'})
    else:
        return 'Unauthorized', 401

if __name__ == "__main__":
    app.run(debug=True)

Implements basic token-based authentication for accessing secure data in a Flask API.


88. Web Scraping with Selenium

from selenium import webdriver

driver = webdriver.Chrome()
driver.get('https://example.com')
title = driver.title
print(title)
driver.quit()

Uses Selenium to open a browser, navigate to a page, and retrieve the page title.


89. Generate Random Data with Faker

from faker import Faker

fake = Faker()
for _ in range(5):
    print(fake.name(), fake.address(), fake.email())

Generates and prints random names, addresses, and emails using the Faker library.


90. Handle File Uploads with Flask

from flask import Flask, request

app = Flask(__name__)

@app.route('/upload', methods=['POST'])
def upload():
    if 'file' not in request.files:
        return 'No file uploaded'
    file = request.files['file']
    file.save(f"./uploads/{file.filename}")
    return 'File uploaded successfully!'

if __name__ == "__main__":
    app.run(debug=True)

Handles file uploads in a Flask application and saves the file to a local directory.


91. Parse HTML with BeautifulSoup

from bs4 import BeautifulSoup
import requests

response = requests.get('https://example.com')
soup = BeautifulSoup(response.text, 'html.parser')
print(soup.prettify())

Parses HTML content and prints it in a readable format using BeautifulSoup.


92. Basic Flask Application with Static Files

from flask import Flask, send_from_directory

app = Flask(__name__)

@app.route('/static/<path:filename>')
def static_files(filename):
    return send_from_directory('static', filename)

if __name__ == "__main__":
    app.run(debug=True)

Serves static files from a specified directory in a Flask application.


93. Create a Simple API with Flask and SQLite

from flask import Flask, jsonify, request
import sqlite3

app = Flask(__name__)

def get_db_connection():
    conn = sqlite3.connect('database.db')
    conn.row_factory = sqlite3.Row
    return conn

@app.route('/items', methods=['GET'])
def get_items():
    conn = get_db_connection()
    items = conn.execute('SELECT * FROM items').fetchall()
    conn.close()
    return jsonify([dict(item) for item in items])

if __name__ == "__main__":
    app.run(debug=True)

Sets up a simple API with Flask that retrieves data from an SQLite database.
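
The endpoint assumes an items table already exists in database.db. A one-off setup sketch (the schema here is an assumption for illustration):

import sqlite3

conn = sqlite3.connect('database.db')
conn.execute('CREATE TABLE IF NOT EXISTS items (id INTEGER PRIMARY KEY, name TEXT)')  # Assumed schema
conn.execute("INSERT INTO items (name) VALUES ('example item')")
conn.commit()
conn.close()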


94. Create an HTML Table from CSV Data

import pandas as pd

df = pd.read_csv('data.csv')
html_table = df.to_html()
with open('table.html', 'w') as f:
    f.write(html_table)

Converts CSV data into an HTML table and saves it as a file.


95. Monitor HTTP Response Time

import requests
import time

url = 'https://example.com'
start_time = time.time()
response = requests.get(url)
end_time = time.time()
response_time = end_time - start_time
print(f'Response time: {response_time:.2f} seconds')

Measures the response time of a web request and prints it.


96. Implement Rate Limiting in Flask

from flask import Flask, request
import time

app = Flask(__name__)
rate_limit = 5 # requests per minute
last_request = {}

@app.route('/api')
def api():
    current_ip = request.remote_addr
    now = time.time()
    if current_ip in last_request:
        if now - last_request[current_ip] < 60 / rate_limit:
            return 'Rate limit exceeded', 429
    last_request[current_ip] = now
    return 'API response'

if __name__ == "__main__":
    app.run(debug=True)

Implements rate limiting based on IP address to control the frequency of API requests.


97. Scrape Data with Scrapy

import scrapy

class QuotesSpider(scrapy.Spider):
    name = 'quotes'
    start_urls = ['https://quotes.toscrape.com']

    def parse(self, response):
        for quote in response.css('div.quote'):
            yield {
                'text': quote.css('span.text::text').get(),
                'author': quote.css('small.author::text').get(),
                'tags': quote.css('div.tags a.tag::text').getall(),
            }

Uses the Scrapy framework to scrape quotes from a website.


98. Upload File to AWS S3

import boto3

s3 = boto3.client('s3')
with open('file.txt', 'rb') as file:
    s3.upload_fileobj(file, 'mybucket', 'file.txt')

Uploads a file to an AWS S3 bucket using the boto3 library.
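
boto3 also provides a path-based helper that opens and streams the file for you:

import boto3

s3 = boto3.client('s3')
s3.upload_file('file.txt', 'mybucket', 'file.txt')  # Local path, bucket name, object key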


99. Use Flask with SQLAlchemy

from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///database.db'
db = SQLAlchemy(app)

class User(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    username = db.Column(db.String(80), unique=True, nullable=False)

@app.route('/')
def index():
    return "Database setup with SQLAlchemy!"

if __name__ == "__main__":
    app.run(debug=True)

Integrates SQLAlchemy with Flask to manage database operations.


100. Web Scraping with Requests and JSON

import requests

response = requests.get('https://example.com/api/data')
data = response.json()
print(data)

Fetches and prints JSON data from a web API using requests.

This collection of 100 Python scripts for beginners is designed to enhance your understanding of web development with practical examples.
