Azure File Share SDK for Python: SMB Cloud Storage Made Easy
Sometimes you need more than object storage—you need real file shares with SMB protocol support, directory structures, and the familiar semantics of traditional file systems. Azure File Share provides exactly that: fully managed SMB file shares in the cloud that can be mounted by Windows, Linux, and macOS clients just like on-premises shares.
What This Skill Does
The azure-storage-file-share-py skill provides a Python SDK for Azure File Share, Microsoft's cloud-based SMB file storage service. It's perfect for lift-and-shift scenarios where applications expect traditional file share semantics, cross-platform file sharing (Windows, Linux, macOS), replacing or extending on-premises file servers, and storing configuration files or shared application data.
The SDK offers four client types: ShareServiceClient for account-level operations, ShareClient for managing individual file shares, ShareDirectoryClient for directory operations, and ShareFileClient for file operations. This hierarchy mirrors the structure of SMB shares: account contains shares, shares contain directories and files.
You can create shares with quotas, manage nested directory structures, upload and download files, set metadata and properties, create share snapshots for point-in-time recovery, and work with file ranges for partial updates. The SDK supports both connection string and Azure AD authentication.
Getting Started
Install the File Share SDK:
pip install azure-storage-file-share
Set your connection string (grab this from the Azure Portal under Access Keys):
export AZURE_STORAGE_CONNECTION_STRING="DefaultEndpointsProtocol=https;AccountName=mystorage;AccountKey=..."
Here's a quick example creating a share, uploading a file, and downloading it:
import os
from azure.storage.fileshare import ShareServiceClient
# Connect to storage account
connection_string = os.environ["AZURE_STORAGE_CONNECTION_STRING"]
service = ShareServiceClient.from_connection_string(connection_string)
# Create file share
share = service.create_share("documents")
# Get share client
share_client = service.get_share_client("documents")
# Create directory
share_client.create_directory("reports")
# Upload file
file_client = share_client.get_file_client("reports/summary.txt")
file_client.upload_file("Q4 financial summary...")
# Download file
download = file_client.download_file()
content = download.readall()
print(f"Downloaded: {content.decode('utf-8')}")
The pattern is familiar: create a service client, get a share client, navigate directories, then upload or download files.
Key Features
SMB Protocol Support: Unlike blob storage, Azure File Share supports the SMB protocol. This means you can mount shares directly on Windows, Linux, or macOS machines and access them like local drives. Perfect for legacy applications expecting file share semantics.
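For example, mounting a share on Linux uses the standard CIFS mount. This is a sketch: `mystorage` and `documents` are placeholder account and share names, the account key is passed as the password, and the client needs cifs-utils installed and outbound port 445 open:

```shell
# Mount an Azure file share over SMB 3.0 on Linux
sudo mkdir -p /mnt/documents
sudo mount -t cifs //mystorage.file.core.windows.net/documents /mnt/documents \
    -o vers=3.0,username=mystorage,password="$STORAGE_ACCOUNT_KEY",serverino
```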
Directory Hierarchies: Real directory structures, not just prefixes. Create nested directories, list contents, and delete entire directory trees. The SDK understands filesystem semantics.
Quota Management: Set quotas on shares to control maximum size. This prevents runaway storage costs and helps with capacity planning. Query usage to monitor consumption.
Share Snapshots: Create point-in-time snapshots of entire shares for backup and recovery. Snapshots are read-only and don't count against your quota until you modify the original share.
Range Operations: Upload or download specific byte ranges within files. This enables partial file updates and efficient large file handling—update just the part that changed instead of re-uploading everything.
Metadata and Properties: Attach custom metadata (key-value pairs) to shares, directories, and files. Query properties like size, last modified time, and content type.
Async Support: A complete async API enables high-performance applications to handle many concurrent operations efficiently.
Usage Examples
Create Share with Quota and Metadata:
from azure.storage.fileshare import ShareServiceClient
service = ShareServiceClient.from_connection_string(connection_string)
# Create share with 100 GB quota
metadata = {"department": "engineering", "environment": "production"}
share = service.create_share(
    "engineering-files",
    quota=100,  # GiB
    metadata=metadata,
)
# create_share returns a ShareClient; read the quota back from the share's properties
print(f"Created share with {share.get_share_properties()['quota']} GB quota")
Upload and List Files in Directory:
share_client = service.get_share_client("engineering-files")
# Create nested directories (each level of the path must exist before the next)
for path in ("projects", "projects/2026", "projects/2026/ml-pipeline"):
    share_client.create_directory(path)
# Upload multiple files
files_to_upload = ["config.yaml", "requirements.txt", "README.md"]
for filename in files_to_upload:
    file_client = share_client.get_file_client(f"projects/2026/ml-pipeline/{filename}")
    with open(filename, "rb") as f:
        file_client.upload_file(f)
    print(f"Uploaded {filename}")
# List directory contents
directory_client = share_client.get_directory_client("projects/2026/ml-pipeline")
for item in directory_client.list_directories_and_files():
    item_type = "[DIR]" if item["is_directory"] else "[FILE]"
    size = f" ({item['size']:,} bytes)" if not item["is_directory"] else ""
    print(f"{item_type} {item['name']}{size}")
Download File with Streaming:
file_client = share_client.get_file_client("projects/2026/ml-pipeline/large-model.bin")
# Download in chunks to avoid memory issues
with open("local-model.bin", "wb") as f:
    download = file_client.download_file()
    for chunk in download.chunks():
        f.write(chunk)
        print(f"Downloaded {len(chunk)} bytes")
print("Download complete!")
Use Range Operations for Partial Updates:
file_client = share_client.get_file_client("data/logfile.txt")
# Create file with specific size
file_client.create_file(size=1024)
# Upload data to specific range (first 100 bytes)
data = b"Initial log entry\n"
file_client.upload_range(data=data, offset=0, length=len(data))
# Later: append more data
new_data = b"Second log entry\n"
file_client.upload_range(data=new_data, offset=len(data), length=len(new_data))
# Download specific range
download = file_client.download_file(offset=0, length=100)
partial_content = download.readall()
print(partial_content.decode('utf-8'))
Create and Use Snapshots:
share_client = service.get_share_client("documents")
# Create snapshot (backup before major changes)
snapshot = share_client.create_snapshot()
snapshot_time = snapshot["snapshot"]
print(f"Snapshot created: {snapshot_time}")
# Make changes to original share...
file_client = share_client.get_file_client("important.txt")
file_client.upload_file("Modified content")
# Access snapshot to recover original
snapshot_share_client = service.get_share_client(
    "documents",
    snapshot=snapshot_time,
)
snapshot_file = snapshot_share_client.get_file_client("important.txt")
original_content = snapshot_file.download_file().readall()
print(f"Original content: {original_content.decode('utf-8')}")
Async Operations for High Throughput:
from azure.storage.fileshare.aio import ShareServiceClient
from azure.identity.aio import DefaultAzureCredential
import asyncio
async def upload_files_concurrently():
    # Use Azure AD authentication; token credentials against Azure Files
    # also require the token_intent parameter
    credential = DefaultAzureCredential()
    account_url = "https://mystorage.file.core.windows.net"
    service = ShareServiceClient(
        account_url=account_url,
        credential=credential,
        token_intent="backup",
    )
    share = service.get_share_client("uploads")
    # Upload 20 files in parallel
    tasks = []
    for i in range(20):
        file_client = share.get_file_client(f"file{i}.txt")
        content = f"Content for file {i}"
        tasks.append(file_client.upload_file(content))
    await asyncio.gather(*tasks)
    print("Uploaded 20 files concurrently")
    await service.close()
    await credential.close()

asyncio.run(upload_files_concurrently())
Best Practices
Use Connection Strings for Simplicity: For quick setups and development, connection strings are the easiest authentication method. Just grab them from the Azure Portal and you're ready to go.
Use Azure AD for Production: For production environments, use DefaultAzureCredential with managed identities or service principals. This is more secure than embedding connection strings and supports RBAC.
Set Appropriate Quotas: Always set share quotas when creating shares. This prevents unexpected storage costs and helps with capacity planning. Monitor usage regularly.
Create Snapshots Before Major Changes: Snapshots are cheap (you only pay for changed data) and provide instant rollback capability. Create them before deployments, migrations, or bulk operations.
Use Range Operations for Large Files: Don't re-upload entire multi-gigabyte files when only small portions change. Use upload_range() to update just the changed bytes. This saves bandwidth and time.
Stream Large Downloads: For files over 100MB, use chunks() to stream downloads instead of calling readall(). This keeps memory usage constant regardless of file size.
Close Async Clients Explicitly: Unlike sync clients, async clients should be explicitly closed with await client.close() or used in async with blocks to prevent resource leaks.
Consider SMB Mounting for Frequent Access: If your application frequently accesses files, consider mounting the share via SMB protocol instead of using the SDK. This provides better performance for file-heavy workloads.
When to Use This Skill
Perfect for:
- Lift-and-shift migrations from on-premises file servers
- Shared configuration files for distributed applications
- Cross-platform file sharing (Windows, Linux, macOS)
- Applications requiring traditional file share semantics
- Development environments needing shared storage
- Legacy applications expecting SMB protocol
- Storing persistent data for containerized applications
Use Blob Storage instead for:
- Web applications serving static content
- Object storage without directory semantics
- Applications already using blob storage APIs
- Maximum cost optimization (blob storage is cheaper)
- Content delivery with CDN integration
Use Data Lake Storage Gen2 instead for:
- Big data analytics workloads
- Hierarchical namespaces with POSIX ACLs
- Integration with Spark, Hadoop, or Databricks
- Data lake architectures
Azure File Share is ideal when you need SMB protocol support, traditional file share behavior, or are migrating existing applications. For new cloud-native applications, blob storage might be simpler and cheaper.
Related Skills
Explore the full Azure File Share SDK skill: /ai-assistant/azure-storage-file-share-py
Source
This skill is provided by Microsoft as part of the Azure SDK for Python (package: azure-storage-file-share).
Need traditional file share semantics in the cloud? Azure File Share brings SMB to Azure with the scalability and reliability you expect.