In this lab, I explored how to provide private, highly available storage for internal company data — the kind of storage that’s not meant for public eyes. The goal was to keep things secure, redundant, and recoverable, with options for access control, backup, and lifecycle management.

This storage is also used as a backup destination for the public website I configured earlier. Let’s walk through it step by step.


🧠 Scenario

The company needs secure storage for office documents, department files, and sensitive internal data. This content must remain private, survive regional outages, and support file versioning and backup for other resources — including the public website.


🛠️ Skilling Tasks

  • ✅ Create a private Azure Storage account
  • ✅ Configure geo-redundancy (GRS)
  • ✅ Create a container and restrict access
  • ✅ Generate a Shared Access Signature (SAS) for partners
  • ✅ Implement lifecycle rules to optimize cost
  • ✅ Configure object replication from a public storage account

🔧 Step-by-Step Guide

🔹 Step 1: Create a Highly Available Storage Account

  1. I went to Storage accounts in the Azure Portal and hit + Create.
  2. For the resource group, I used storagerg.
  3. I named the storage account privatebob (making sure it was globally unique).
  4. Left the rest as defaults and clicked Review + Create → then Create.

🔁 This account would serve as the internal data store and backup location for our website files.
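The same step can be sketched with the Azure CLI, for anyone who prefers scripting over the portal (assumes you're already signed in with `az login`; the region `eastus` is my assumption — the lab left it at the default):

```shell
# Create the private storage account in the lab's resource group.
# Names match the lab: resource group "storagerg", account "privatebob".
az storage account create \
  --name privatebob \
  --resource-group storagerg \
  --location eastus \
  --kind StorageV2
```

Like the portal flow, this leaves redundancy at the default — we'll change it in the next step.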


🔹 Step 2: Set Redundancy to GRS

Because internal documents are critical, I needed regional resiliency:

  • Opened the storage account.
  • Navigated to Redundancy under Data management.
  • Selected Geo-redundant storage (GRS) without read-access (RA-GRS wasn’t necessary here).

GRS asynchronously copies data to the paired secondary region, so even if the primary region goes down, the data remains safe.
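The redundancy change can also be applied from the CLI in one command (same account names as above):

```shell
# Switch the account's replication from the default to geo-redundant storage.
az storage account update \
  --name privatebob \
  --resource-group storagerg \
  --sku Standard_GRS
```

If read access to the secondary region were needed, `Standard_RAGRS` would be the SKU instead — but as noted, RA-GRS wasn't necessary here.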


🔹 Step 3: Create a Private Blob Container

  1. Under Data Storage, I opened Containers.
  2. Clicked + Container, named it private, and left Public access as Private (no anonymous access).
  3. Clicked Create.

Then I uploaded a file — just a small .txt doc for testing.

To confirm it was private:

  • Copied the blob’s URL from the file’s Overview.
  • Pasted it in a browser → got an access error as expected.

🔐 Perfect — this container is locked down and internal-only.
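For reference, here's a CLI sketch of the same container-and-upload flow (assumes an authenticated session with rights on the account; `test.txt` is the placeholder file from the lab):

```shell
# Create the container with anonymous (public) access disabled.
az storage container create \
  --account-name privatebob \
  --name private \
  --public-access off \
  --auth-mode login

# Upload a small test file to verify the container works.
az storage blob upload \
  --account-name privatebob \
  --container-name private \
  --name test.txt \
  --file ./test.txt \
  --auth-mode login
```

With `--public-access off`, hitting the blob URL anonymously returns the same access error seen in the browser test.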


🔹 Step 4: Share a File via SAS (Restricted Access)

Later, I needed to give an external partner temporary read-only access:

  1. Opened the file inside the private container.
  2. Went to the Generate SAS tab.
  3. Set permissions to Read only.
  4. Adjusted the time window to cover the next 24 hours.
  5. Generated the Blob SAS URL, copied it, and tested it in a browser.

The file loaded ✅ — but only through that secure, tokenized link. Great way to share content without making the container public.
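The SAS step can be scripted too. A minimal sketch, assuming your shell has GNU `date` and the CLI can resolve the account key (e.g. via a configured connection string):

```shell
# Read-only SAS expiring 24 hours from now, in the UTC format the CLI expects.
EXPIRY=$(date -u -d "+24 hours" '+%Y-%m-%dT%H:%MZ')

# Generate the full tokenized URL for the blob (read permission only, HTTPS only).
az storage blob generate-sas \
  --account-name privatebob \
  --container-name private \
  --name test.txt \
  --permissions r \
  --expiry "$EXPIRY" \
  --https-only \
  --full-uri
```

The printed URL is the same kind of link the portal's Generate SAS tab produces — shareable, read-only, and dead after 24 hours.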


🔹 Step 5: Implement Lifecycle Management (Hot → Cool)

To reduce costs, I configured a lifecycle rule to move files from the hot tier to the cool tier after 30 days:

  1. Opened the storage account and went to Lifecycle Management.
  2. Created a new rule:
    • Rule name: movetocool
    • Scope: all blobs
    • Condition: Last modified > 30 days
    • Action: Move to cool storage
  3. Saved the rule.

🧊 Azure will now automatically optimize older content to a cheaper storage tier.
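The same rule can be defined as a management-policy JSON and pushed with the CLI — useful if you want the rule in source control. A sketch with the lab's rule name and 30-day condition:

```shell
# Lifecycle rule: move block blobs to the cool tier 30 days after last modification.
cat > policy.json <<'EOF'
{
  "rules": [
    {
      "enabled": true,
      "name": "movetocool",
      "type": "Lifecycle",
      "definition": {
        "filters": { "blobTypes": [ "blockBlob" ] },
        "actions": {
          "baseBlob": {
            "tierToCool": { "daysAfterModificationGreaterThan": 30 }
          }
        }
      }
    }
  ]
}
EOF

# Apply the policy to the account.
az storage account management-policy create \
  --account-name privatebob \
  --resource-group storagerg \
  --policy @policy.json
```

The empty `filters` scope matches all block blobs, mirroring the "all blobs" scope chosen in the portal.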


🔹 Step 6: Set Up Backup via Object Replication

Since this private storage account is also used to back up my public website, I set up object replication:

  1. Created a new container in the private account called backup.
  2. Went to my publicwebsitebob storage account (from https://dev.to/1suleyman/exercise-02-setting-up-public-website-storage-in-azure-297g).
  3. Opened Object replication > Create replication rule.
  4. Set:
    • Destination account: privatebob
    • Source container: public
    • Destination container: backup

Uploaded a test image to publicwebsitebob/public and waited...

📂 A few minutes later, that same file appeared in privatebob/backup. Magic.
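Object replication can be wired up from the CLI as well. Note that replication requires blob versioning (and change feed on the source) to be enabled on both accounts — the portal prompts for this, but in a script you'd enable it first. A sketch with the lab's account and container names:

```shell
# Destination container on the private account.
az storage container create \
  --account-name privatebob \
  --name backup \
  --auth-mode login

# Replication rule: publicwebsitebob/public -> privatebob/backup.
az storage account or-policy create \
  --account-name privatebob \
  --resource-group storagerg \
  --source-account publicwebsitebob \
  --destination-account privatebob \
  --source-container public \
  --destination-container backup
```

After the policy is in place, new blobs landing in the source container are copied to the destination asynchronously — hence the few minutes of waiting.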


🔁 Key Takeaways

By the end of this lab, I had a secure, redundant, and versatile storage account configured with smart automation and fine-grained access control. Here's what I gained:

  • 🔒 Private containers restrict anonymous access by default
  • 🔁 Geo-redundant storage protects data across regions
  • 🔐 SAS links let you share individual files securely for a limited time
  • 🧠 Lifecycle rules save money by auto-moving data to cheaper tiers
  • 📥 Object replication creates a backup stream between two accounts

📌 Pro Tip: If you’re managing multiple storage accounts across dev, staging, and prod — object replication + lifecycle rules can save you serious time and cost.

Another great exercise in the books — and more proof of how powerful (and flexible) Azure Storage really is.