
CJK Sasha Network Consolidation — Implementation Plan

For agentic workers: REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (- [ ]) syntax for tracking.

Goal: Move CJK Sasha Fargate container from eu-west-2 (isolated VPC/ALB) into the existing KnowCode ECS cluster in eu-west-1 (shared ALB), saving ~$28/month.

Architecture: Sasha joins the knowcode ECS cluster alongside planb-app/planb-admin. It gets a host-header rule on planb-alb for cjkassociates.app.context-is-everything.com, a new target group on port 3005, new EFS in eu-west-1, and a new ECR repo. Cloudflare DNS switches the CNAME. Old eu-west-2 infrastructure is torn down.

Tech Stack: AWS ECS Fargate, ALB, EFS, ECR, ACM, CloudWatch, Application Auto Scaling, Cloudflare DNS


Reference: Current Infrastructure

| Component | eu-west-2 (current) | eu-west-1 (target) |
|---|---|---|
| ECS Cluster | sasha-cluster | knowcode |
| ECS Service | sasha-cjkassociates-aws | sasha-cjkassociates (new) |
| ALB | sasha-alb | planb-alb (shared) |
| VPC | vpc-0d9e879efb2889b4d | vpc-077f570e0d9b20e84 |
| Subnets (public) | subnet-03f0cc95e3c817b88, subnet-00dabfb86a31e7505 | subnet-09f2ef5cee756509b, subnet-0a72a5762cf09b431 |
| ECR | 058865713619.dkr.ecr.eu-west-2.amazonaws.com/sasha-studio | 058865713619.dkr.ecr.eu-west-1.amazonaws.com/sasha-studio (new) |
| EFS | fs-08bae33d8c55688da | (new) |
| ACM | cert *.app.context-is-everything.com (eu-west-2) | (new, same domain) |
| ALB SG | | sg-01ef064c95ddb4de4 (planb-alb-sg) |
| ECS SG | sg-0f7f18b62b151f888 | (new, sasha-ecs-sg) |
| IAM task role | sasha-ecs-task (global) | same |
| IAM exec role | sasha-ecs-execution (global) | same |
| AWS profile | knowcode-admin | knowcode-admin |
| AWS account | 058865713619 | 058865713619 |

Task 1: Request ACM Certificate in eu-west-1

Why: The ALB HTTPS listener needs a certificate covering cjkassociates.app.context-is-everything.com. The existing cert only covers *.planbbackups.io.

  • Step 1: Request the wildcard certificate
aws acm request-certificate \
  --domain-name "*.app.context-is-everything.com" \
  --validation-method DNS \
  --region eu-west-1 \
  --profile knowcode-admin \
  --output json

Save the returned CertificateArn.

  • Step 2: Get the DNS validation record
aws acm describe-certificate \
  --certificate-arn <CERT_ARN> \
  --region eu-west-1 \
  --profile knowcode-admin \
  --query 'Certificate.DomainValidationOptions[0].ResourceRecord' \
  --output json

This returns a CNAME name/value pair.

  • Step 3: Add the CNAME in Cloudflare

In Cloudflare DNS for context-is-everything.com, add the CNAME record from step 2. Set proxy status to "DNS only" (grey cloud).
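If you prefer the API over the dashboard, the validation record can be created with a single call. This is a hedged sketch: CF_API_TOKEN, CF_ZONE_ID, VALIDATION_NAME, and VALIDATION_VALUE are assumed placeholder variables you would set from your Cloudflare account and the step 2 output.

```shell
# Hypothetical sketch: create the ACM validation CNAME via the Cloudflare API.
# CF_API_TOKEN / CF_ZONE_ID are assumed credentials; VALIDATION_NAME and
# VALIDATION_VALUE come from the describe-certificate output in step 2.
PAYLOAD=$(jq -n \
  --arg name "${VALIDATION_NAME:-}" \
  --arg content "${VALIDATION_VALUE:-}" \
  '{type: "CNAME", name: $name, content: $content, ttl: 300, proxied: false}')

# proxied:false is the "DNS only" (grey cloud) setting ACM validation requires
if [ -n "${CF_API_TOKEN:-}" ]; then
  curl -sS -X POST \
    "https://api.cloudflare.com/client/v4/zones/${CF_ZONE_ID}/dns_records" \
    -H "Authorization: Bearer ${CF_API_TOKEN}" \
    -H "Content-Type: application/json" \
    --data "$PAYLOAD"
fi
```

The dashboard route in the step above is equivalent; the API call just makes the step scriptable.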

  • Step 4: Wait for validation and verify
# Poll until status is ISSUED (usually 2-5 minutes)
aws acm describe-certificate \
  --certificate-arn <CERT_ARN> \
  --region eu-west-1 \
  --profile knowcode-admin \
  --query 'Certificate.Status' \
  --output text

Expected: ISSUED


Task 2: Create ECR Repository in eu-west-1

  • Step 1: Create the repo
aws ecr create-repository \
  --repository-name sasha-studio \
  --region eu-west-1 \
  --profile knowcode-admin \
  --image-scanning-configuration scanOnPush=true \
  --output json

Expected: repository URI 058865713619.dkr.ecr.eu-west-1.amazonaws.com/sasha-studio

  • Step 2: Add lifecycle policy to limit stored images
aws ecr put-lifecycle-policy \
  --repository-name sasha-studio \
  --region eu-west-1 \
  --profile knowcode-admin \
  --lifecycle-policy-text '{"rules":[{"rulePriority":1,"description":"Keep last 10 images","selection":{"tagStatus":"any","countType":"imageCountMoreThan","countNumber":10},"action":{"type":"expire"}}]}'

Task 3: Copy Container Image to eu-west-1

  • Step 1: Authenticate crane to both ECR regions
aws ecr get-login-password --region eu-west-2 --profile knowcode-admin | \
  crane auth login 058865713619.dkr.ecr.eu-west-2.amazonaws.com --username AWS --password-stdin

aws ecr get-login-password --region eu-west-1 --profile knowcode-admin | \
  crane auth login 058865713619.dkr.ecr.eu-west-1.amazonaws.com --username AWS --password-stdin
  • Step 2: Copy the image
crane copy \
  058865713619.dkr.ecr.eu-west-2.amazonaws.com/sasha-studio:latest \
  058865713619.dkr.ecr.eu-west-1.amazonaws.com/sasha-studio:latest \
  --platform linux/amd64
  • Step 3: Verify the image exists
aws ecr describe-images \
  --repository-name sasha-studio \
  --region eu-west-1 \
  --profile knowcode-admin \
  --query 'imageDetails[*].{tags:imageTags,pushed:imagePushedAt,size:imageSizeInBytes}' \
  --output json

Task 4: Create EFS in eu-west-1

  • Step 1: Create the file system
aws efs create-file-system \
  --creation-token sasha-efs-euwest1 \
  --performance-mode generalPurpose \
  --throughput-mode elastic \
  --encrypted \
  --tags Key=Name,Value=sasha-efs \
  --region eu-west-1 \
  --profile knowcode-admin \
  --output json

Save the returned FileSystemId.

  • Step 2: Create mount targets in both public subnets

First, create or identify a security group for EFS:

aws ec2 create-security-group \
  --group-name sasha-efs-sg \
  --description "EFS access for Sasha containers" \
  --vpc-id vpc-077f570e0d9b20e84 \
  --region eu-west-1 \
  --profile knowcode-admin \
  --output json

Save the GroupId. Allow NFS (port 2049) inbound from the VPC CIDR (10.0.0.0/16 is assumed below; confirm the actual CidrBlock of vpc-077f570e0d9b20e84 with aws ec2 describe-vpcs and substitute if it differs):

aws ec2 authorize-security-group-ingress \
  --group-id <EFS_SG_ID> \
  --protocol tcp \
  --port 2049 \
  --cidr 10.0.0.0/16 \
  --region eu-west-1 \
  --profile knowcode-admin

Create mount targets:

aws efs create-mount-target \
  --file-system-id <EFS_ID> \
  --subnet-id subnet-09f2ef5cee756509b \
  --security-groups <EFS_SG_ID> \
  --region eu-west-1 \
  --profile knowcode-admin

aws efs create-mount-target \
  --file-system-id <EFS_ID> \
  --subnet-id subnet-0a72a5762cf09b431 \
  --security-groups <EFS_SG_ID> \
  --region eu-west-1 \
  --profile knowcode-admin
  • Step 3: Create access points (matching eu-west-2 config)

Access point for /home/sasha (sasha-home):

aws efs create-access-point \
  --file-system-id <EFS_ID> \
  --posix-user Uid=1000,Gid=1000 \
  --root-directory "Path=/cjkassociates-aws/home,CreationInfo={OwnerUid=1000,OwnerGid=1000,Permissions=0755}" \
  --tags Key=Name,Value=sasha-home \
  --region eu-west-1 \
  --profile knowcode-admin \
  --output json

Save the AccessPointId as SASHA_HOME_AP.

Access point for /app/data (sasha-appdata):

aws efs create-access-point \
  --file-system-id <EFS_ID> \
  --posix-user Uid=1000,Gid=1000 \
  --root-directory "Path=/cjkassociates-aws/appdata,CreationInfo={OwnerUid=1000,OwnerGid=1000,Permissions=0755}" \
  --tags Key=Name,Value=sasha-appdata \
  --region eu-west-1 \
  --profile knowcode-admin \
  --output json

Save the AccessPointId as SASHA_APPDATA_AP.

  • Step 4: Wait for mount targets to become available
aws efs describe-mount-targets \
  --file-system-id <EFS_ID> \
  --region eu-west-1 \
  --profile knowcode-admin \
  --query 'MountTargets[*].{SubnetId:SubnetId,LifeCycleState:LifeCycleState}' \
  --output table

Expected: both show available (may take 1-2 minutes).


Task 5: Migrate EFS Data

The old EFS has ~1.2GB of data. Since this is cross-region, we'll use AWS DataSync.

  • Step 1: Create DataSync source location (eu-west-2 EFS)
# Get the mount target SG and subnet from eu-west-2
MT_INFO=$(aws efs describe-mount-targets \
  --file-system-id fs-08bae33d8c55688da \
  --region eu-west-2 \
  --profile knowcode-admin \
  --query 'MountTargets[0].{SubnetId:SubnetId}' \
  --output json)

aws datasync create-location-efs \
  --efs-filesystem-arn arn:aws:elasticfilesystem:eu-west-2:058865713619:file-system/fs-08bae33d8c55688da \
  --ec2-config "SubnetArn=arn:aws:ec2:eu-west-2:058865713619:subnet/$(echo $MT_INFO | jq -r '.SubnetId'),SecurityGroupArns=[arn:aws:ec2:eu-west-2:058865713619:security-group/sg-0f7f18b62b151f888]" \
  --subdirectory "/" \
  --region eu-west-2 \
  --profile knowcode-admin \
  --output json

Save LocationArn.

  • Step 2: Create DataSync destination location (eu-west-1 EFS)
aws datasync create-location-efs \
  --efs-filesystem-arn arn:aws:elasticfilesystem:eu-west-1:058865713619:file-system/<EFS_ID> \
  --ec2-config "SubnetArn=arn:aws:ec2:eu-west-1:058865713619:subnet/subnet-09f2ef5cee756509b,SecurityGroupArns=[arn:aws:ec2:eu-west-1:058865713619:security-group/<EFS_SG_ID>]" \
  --subdirectory "/" \
  --region eu-west-1 \
  --profile knowcode-admin \
  --output json

Save LocationArn.

  • Step 3: Create and start DataSync task
aws datasync create-task \
  --source-location-arn <SOURCE_LOCATION_ARN> \
  --destination-location-arn <DEST_LOCATION_ARN> \
  --name sasha-efs-migration \
  --options "VerifyMode=ONLY_FILES_TRANSFERRED,OverwriteMode=ALWAYS,PreserveDeletedFiles=PRESERVE,Uid=BOTH,Gid=BOTH,Atime=BEST_EFFORT,Mtime=PRESERVE,PosixPermissions=PRESERVE" \
  --region eu-west-2 \
  --profile knowcode-admin \
  --output json

Save TaskArn, then start:

aws datasync start-task-execution \
  --task-arn <TASK_ARN> \
  --region eu-west-2 \
  --profile knowcode-admin
  • Step 4: Monitor DataSync progress
aws datasync describe-task-execution \
  --task-execution-arn <TASK_EXECUTION_ARN> \
  --region eu-west-2 \
  --profile knowcode-admin \
  --query '{Status:Status,BytesTransferred:BytesTransferred,FilesTransferred:FilesTransferred}' \
  --output json

Expected: Status: SUCCESS after a few minutes for ~1.2GB.
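Rather than re-running the describe call by hand, the check can be wrapped in a small polling loop. wait_for_status is a hypothetical helper, not an AWS CLI feature:

```shell
# Hypothetical polling helper: repeats a command until it prints the wanted
# status (or gives up after ~5 minutes), so the describe call doesn't have to
# be re-run manually.
wait_for_status() {
  want="$1"; shift
  tries=30
  while [ "$tries" -gt 0 ]; do
    status=$("$@" 2>/dev/null)
    [ "$status" = "$want" ] && return 0
    tries=$((tries - 1))
    sleep 10
  done
  echo "timed out waiting for status: $want" >&2
  return 1
}

# Usage (substitute the real execution ARN):
# wait_for_status SUCCESS aws datasync describe-task-execution \
#   --task-execution-arn "$TASK_EXECUTION_ARN" \
#   --region eu-west-2 --profile knowcode-admin \
#   --query 'Status' --output text
```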

Alternative (if DataSync proves problematic): use the running container to tar the data, base64-encode it for transfer to the local machine, then push it into the new EFS via a temporary ECS task. This follows the "Downloading Files from Sliplane" pattern documented in CLAUDE.md.
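The core of that fallback is just a tar stream base64-encoded across the exec channel. A minimal local demonstration of the pipeline (the paths here are scratch directories standing in for the container data dir and the new EFS mount):

```shell
# Local demonstration of the tar -> base64 -> restore pipeline the fallback
# relies on. SRC stands in for the old container's data dir, DST for the new
# EFS mount; both are throwaway temp directories here.
SRC=$(mktemp -d)
DST=$(mktemp -d)
echo "hello from eu-west-2" > "$SRC/sample.txt"

# Sending side (would run inside the old container via ECS Exec)
tar czf - -C "$SRC" . | base64 | tr -d '\n' > /tmp/sasha-payload.b64

# Receiving side (would run inside a temporary task mounted on the new EFS)
base64 -d < /tmp/sasha-payload.b64 | tar xzf - -C "$DST"

cat "$DST/sample.txt"   # -> hello from eu-west-2
```

In the real run, the sending pipeline executes via aws ecs execute-command and the receiving side inside a temporary task, with the base64 stream relayed through the local machine.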


Task 6: Create Security Group for Sasha ECS Service

  • Step 1: Create the security group
aws ec2 create-security-group \
  --group-name sasha-ecs-sg \
  --description "Sasha ECS service - allow ALB traffic on 3005" \
  --vpc-id vpc-077f570e0d9b20e84 \
  --region eu-west-1 \
  --profile knowcode-admin \
  --output json

Save the GroupId.

  • Step 2: Allow inbound from ALB security group on port 3005

This matches the pattern from planb-ecs-sg (which allows port 3000 from planb-alb-sg):

aws ec2 authorize-security-group-ingress \
  --group-id <SASHA_SG_ID> \
  --protocol tcp \
  --port 3005 \
  --source-group sg-01ef064c95ddb4de4 \
  --region eu-west-1 \
  --profile knowcode-admin
  • Step 3: Verify
aws ec2 describe-security-groups \
  --group-ids <SASHA_SG_ID> \
  --region eu-west-1 \
  --profile knowcode-admin \
  --query 'SecurityGroups[0].IpPermissions' \
  --output json

Expected: TCP 3005 from sg-01ef064c95ddb4de4.


Task 7: Create CloudWatch Log Group

  • Step 1: Create the log group
aws logs create-log-group \
  --log-group-name /ecs/sasha \
  --region eu-west-1 \
  --profile knowcode-admin
  • Step 2: Set retention
aws logs put-retention-policy \
  --log-group-name /ecs/sasha \
  --retention-in-days 30 \
  --region eu-west-1 \
  --profile knowcode-admin

Task 8: Register Task Definition in eu-west-1

  • Step 1: Export current task definition as JSON
aws ecs describe-task-definition \
  --task-definition sasha-cjkassociates:37 \
  --region eu-west-2 \
  --profile knowcode-admin \
  --query 'taskDefinition' \
  --output json > /tmp/sasha-task-def-source.json
  • Step 2: Create modified task definition JSON

Create /tmp/sasha-task-def-euwest1.json with these changes from the source:

  • image → 058865713619.dkr.ecr.eu-west-1.amazonaws.com/sasha-studio:latest
  • logConfiguration.options.awslogs-region → eu-west-1
  • volumes[0].efsVolumeConfiguration.fileSystemId → new EFS ID
  • volumes[0].efsVolumeConfiguration.authorizationConfig.accessPointId → SASHA_HOME_AP
  • volumes[1].efsVolumeConfiguration.fileSystemId → new EFS ID
  • volumes[1].efsVolumeConfiguration.authorizationConfig.accessPointId → SASHA_APPDATA_AP
  • Remove fields not allowed in register: taskDefinitionArn, revision, status, requiresAttributes, compatibilities, registeredAt, registeredBy
  • Keep all 26 environment variables exactly as-is
# Use jq to transform (substitute actual IDs)
cat /tmp/sasha-task-def-source.json | jq '
  del(.taskDefinitionArn, .revision, .status, .requiresAttributes, .compatibilities, .registeredAt, .registeredBy) |
  .containerDefinitions[0].image = "058865713619.dkr.ecr.eu-west-1.amazonaws.com/sasha-studio:latest" |
  .containerDefinitions[0].logConfiguration.options."awslogs-region" = "eu-west-1" |
  .volumes[0].efsVolumeConfiguration.fileSystemId = "<EFS_ID>" |
  .volumes[0].efsVolumeConfiguration.authorizationConfig.accessPointId = "<SASHA_HOME_AP>" |
  .volumes[1].efsVolumeConfiguration.fileSystemId = "<EFS_ID>" |
  .volumes[1].efsVolumeConfiguration.authorizationConfig.accessPointId = "<SASHA_APPDATA_AP>"
' > /tmp/sasha-task-def-euwest1.json
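Before pointing the filter at the real export, you can sanity-check it against a stub document and confirm the registration-only keys are gone and the substitutions landed. A sketch (the stub and the fs-NEW/fsap-* IDs are illustrative, not the real task definition):

```shell
# Run the same jq program over a minimal stub and verify the transform.
STUB='{"taskDefinitionArn":"arn:old","revision":37,"status":"ACTIVE",
  "containerDefinitions":[{"image":"old-image",
    "logConfiguration":{"options":{"awslogs-region":"eu-west-2"}}}],
  "volumes":[{"efsVolumeConfiguration":{"fileSystemId":"fs-old",
    "authorizationConfig":{"accessPointId":"fsap-old"}}},
   {"efsVolumeConfiguration":{"fileSystemId":"fs-old",
    "authorizationConfig":{"accessPointId":"fsap-old"}}}]}'

OUT=$(printf '%s' "$STUB" | jq '
  del(.taskDefinitionArn, .revision, .status, .requiresAttributes, .compatibilities, .registeredAt, .registeredBy) |
  .containerDefinitions[0].image = "058865713619.dkr.ecr.eu-west-1.amazonaws.com/sasha-studio:latest" |
  .containerDefinitions[0].logConfiguration.options."awslogs-region" = "eu-west-1" |
  .volumes[0].efsVolumeConfiguration.fileSystemId = "fs-NEW" |
  .volumes[0].efsVolumeConfiguration.authorizationConfig.accessPointId = "fsap-HOME" |
  .volumes[1].efsVolumeConfiguration.fileSystemId = "fs-NEW" |
  .volumes[1].efsVolumeConfiguration.authorizationConfig.accessPointId = "fsap-APPDATA"')

# Registration-only keys must be gone and the log region updated
printf '%s' "$OUT" | jq -e 'has("revision") | not'
printf '%s' "$OUT" | jq -r '.containerDefinitions[0].logConfiguration.options."awslogs-region"'
```

The same two jq checks, run against /tmp/sasha-task-def-euwest1.json, are a cheap guard before Step 3.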
  • Step 3: Register the task definition
aws ecs register-task-definition \
  --cli-input-json file:///tmp/sasha-task-def-euwest1.json \
  --region eu-west-1 \
  --profile knowcode-admin \
  --output json

Save the taskDefinition.taskDefinitionArn.


Task 9: Create Target Group and ALB Listener Rule

  • Step 1: Create target group
aws elbv2 create-target-group \
  --name sasha-cjk-tg \
  --protocol HTTP \
  --port 3005 \
  --vpc-id vpc-077f570e0d9b20e84 \
  --target-type ip \
  --health-check-path "/" \
  --health-check-interval-seconds 30 \
  --health-check-timeout-seconds 5 \
  --healthy-threshold-count 2 \
  --unhealthy-threshold-count 3 \
  --region eu-west-1 \
  --profile knowcode-admin \
  --output json

Save the TargetGroupArn.

  • Step 2: Add ACM certificate to ALB HTTPS listener
aws elbv2 add-listener-certificates \
  --listener-arn "arn:aws:elasticloadbalancing:eu-west-1:058865713619:listener/app/planb-alb/84868bdc89bc4f54/bdc3c677bc15757a" \
  --certificates CertificateArn=<CERT_ARN> \
  --region eu-west-1 \
  --profile knowcode-admin
  • Step 3: Create host-header listener rule (priority 5)
aws elbv2 create-rule \
  --listener-arn "arn:aws:elasticloadbalancing:eu-west-1:058865713619:listener/app/planb-alb/84868bdc89bc4f54/bdc3c677bc15757a" \
  --priority 5 \
  --conditions '[{"Field":"host-header","HostHeaderConfig":{"Values":["cjkassociates.app.context-is-everything.com"]}}]' \
  --actions '[{"Type":"forward","TargetGroupArn":"<TARGET_GROUP_ARN>"}]' \
  --region eu-west-1 \
  --profile knowcode-admin \
  --output json

Task 10: Create ECS Service in knowcode Cluster

  • Step 1: Create the service
aws ecs create-service \
  --cluster knowcode \
  --service-name sasha-cjkassociates \
  --task-definition sasha-cjkassociates \
  --desired-count 1 \
  --launch-type FARGATE \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-09f2ef5cee756509b,subnet-0a72a5762cf09b431],securityGroups=[<SASHA_SG_ID>],assignPublicIp=ENABLED}" \
  --load-balancers "targetGroupArn=<TARGET_GROUP_ARN>,containerName=sasha,containerPort=3005" \
  --enable-execute-command \
  --region eu-west-1 \
  --profile knowcode-admin \
  --output json
  • Step 2: Wait for service to stabilize
aws ecs wait services-stable \
  --cluster knowcode \
  --services sasha-cjkassociates \
  --region eu-west-1 \
  --profile knowcode-admin
  • Step 3: Verify task is running
aws ecs describe-services \
  --cluster knowcode \
  --services sasha-cjkassociates \
  --region eu-west-1 \
  --profile knowcode-admin \
  --query 'services[0].{status:status,running:runningCount,desired:desiredCount,events:events[0].message}' \
  --output json

Expected: runningCount: 1, desiredCount: 1.

  • Step 4: Check target group health
aws elbv2 describe-target-health \
  --target-group-arn <TARGET_GROUP_ARN> \
  --region eu-west-1 \
  --profile knowcode-admin \
  --output json

Expected: target state healthy.


Task 11: Update Cloudflare DNS

  • Step 1: Update the CNAME record

In Cloudflare DNS for context-is-everything.com, update the existing record for cjkassociates.app from the old eu-west-2 ALB DNS to:

cjkassociates.app  CNAME  planb-alb-830836528.eu-west-1.elb.amazonaws.com

Set proxy status to "Proxied" (orange cloud) if it was proxied before, or "DNS only" for direct ALB access.
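The repoint can also be scripted against the Cloudflare API. A hedged sketch, assuming CF_API_TOKEN and CF_ZONE_ID credentials (ttl 1 is Cloudflare's "auto"; flip proxied to false for direct ALB access):

```shell
# Hypothetical sketch: repoint the CNAME via the Cloudflare API instead of
# the dashboard. CF_API_TOKEN / CF_ZONE_ID are assumed credentials.
RECORD_NAME="cjkassociates.app.context-is-everything.com"
NEW_TARGET="planb-alb-830836528.eu-west-1.elb.amazonaws.com"

PAYLOAD=$(jq -n --arg name "$RECORD_NAME" --arg content "$NEW_TARGET" \
  '{type: "CNAME", name: $name, content: $content, ttl: 1, proxied: true}')

if [ -n "${CF_API_TOKEN:-}" ]; then
  # Find the existing record's ID, then overwrite it in place
  RECORD_ID=$(curl -sS \
    "https://api.cloudflare.com/client/v4/zones/${CF_ZONE_ID}/dns_records?name=${RECORD_NAME}" \
    -H "Authorization: Bearer ${CF_API_TOKEN}" | jq -r '.result[0].id')
  curl -sS -X PUT \
    "https://api.cloudflare.com/client/v4/zones/${CF_ZONE_ID}/dns_records/${RECORD_ID}" \
    -H "Authorization: Bearer ${CF_API_TOKEN}" \
    -H "Content-Type: application/json" \
    --data "$PAYLOAD"
fi
```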

  • Step 2: Verify end-to-end
# Check DNS resolution
dig cjkassociates.app.context-is-everything.com

# Check HTTPS access
curl -sS -o /dev/null -w "%{http_code}" https://cjkassociates.app.context-is-everything.com/

Expected: HTTP 200 (or 302 redirect to login).

  • Step 3: Test ECS Exec on new service
TASK_ARN=$(aws ecs list-tasks --cluster knowcode --service-name sasha-cjkassociates \
  --region eu-west-1 --profile knowcode-admin --query 'taskArns[0]' --output text)

aws ecs execute-command \
  --cluster knowcode \
  --task "$TASK_ARN" \
  --container sasha \
  --region eu-west-1 \
  --profile knowcode-admin \
  --interactive \
  --command "ls /home/sasha/all-project-files/"

Expected: see the migrated files from EFS.


Task 12: Recreate Scheduled Shutdown in eu-west-1

  • Step 1: Register scalable target
aws application-autoscaling register-scalable-target \
  --service-namespace ecs \
  --resource-id service/knowcode/sasha-cjkassociates \
  --scalable-dimension ecs:service:DesiredCount \
  --min-capacity 0 \
  --max-capacity 1 \
  --region eu-west-1 \
  --profile knowcode-admin
  • Step 2: Create scale-up schedule (08:00 Mon-Fri London)
aws application-autoscaling put-scheduled-action \
  --service-namespace ecs \
  --resource-id service/knowcode/sasha-cjkassociates \
  --scalable-dimension ecs:service:DesiredCount \
  --scheduled-action-name sasha-scale-up-business-hours \
  --schedule "cron(0 8 ? * MON-FRI *)" \
  --timezone "Europe/London" \
  --scalable-target-action MinCapacity=1,MaxCapacity=1 \
  --region eu-west-1 \
  --profile knowcode-admin
  • Step 3: Create scale-down schedule (18:30 Mon-Fri London)
aws application-autoscaling put-scheduled-action \
  --service-namespace ecs \
  --resource-id service/knowcode/sasha-cjkassociates \
  --scalable-dimension ecs:service:DesiredCount \
  --scheduled-action-name sasha-scale-down-after-hours \
  --schedule "cron(30 18 ? * MON-FRI *)" \
  --timezone "Europe/London" \
  --scalable-target-action MinCapacity=0,MaxCapacity=0 \
  --region eu-west-1 \
  --profile knowcode-admin
  • Step 4: Verify schedules
aws application-autoscaling describe-scheduled-actions \
  --service-namespace ecs \
  --resource-id service/knowcode/sasha-cjkassociates \
  --region eu-west-1 \
  --profile knowcode-admin \
  --output table

Expected: two scheduled actions matching the eu-west-2 originals.


Task 13: Tear Down eu-west-2 Infrastructure

WARNING: Only proceed after Task 11 is verified and the new service has been running successfully.

  • Step 1: Remove auto-scaling schedules in eu-west-2
aws application-autoscaling delete-scheduled-action \
  --service-namespace ecs \
  --resource-id service/sasha-cluster/sasha-cjkassociates-aws \
  --scalable-dimension ecs:service:DesiredCount \
  --scheduled-action-name sasha-scale-up-business-hours \
  --region eu-west-2 \
  --profile knowcode-admin

aws application-autoscaling delete-scheduled-action \
  --service-namespace ecs \
  --resource-id service/sasha-cluster/sasha-cjkassociates-aws \
  --scalable-dimension ecs:service:DesiredCount \
  --scheduled-action-name sasha-scale-down-after-hours \
  --region eu-west-2 \
  --profile knowcode-admin

aws application-autoscaling deregister-scalable-target \
  --service-namespace ecs \
  --resource-id service/sasha-cluster/sasha-cjkassociates-aws \
  --scalable-dimension ecs:service:DesiredCount \
  --region eu-west-2 \
  --profile knowcode-admin
  • Step 2: Scale down and delete ECS service
aws ecs update-service \
  --cluster sasha-cluster \
  --service sasha-cjkassociates-aws \
  --desired-count 0 \
  --region eu-west-2 \
  --profile knowcode-admin

# Wait for tasks to drain
aws ecs wait services-stable \
  --cluster sasha-cluster \
  --services sasha-cjkassociates-aws \
  --region eu-west-2 \
  --profile knowcode-admin

aws ecs delete-service \
  --cluster sasha-cluster \
  --service sasha-cjkassociates-aws \
  --region eu-west-2 \
  --profile knowcode-admin
  • Step 3: Delete ALB, listener, target group
# Get ALB ARN
ALB_ARN=$(aws elbv2 describe-load-balancers --names sasha-alb \
  --region eu-west-2 --profile knowcode-admin \
  --query 'LoadBalancers[0].LoadBalancerArn' --output text)

# Delete listeners first
LISTENERS=$(aws elbv2 describe-listeners --load-balancer-arn $ALB_ARN \
  --region eu-west-2 --profile knowcode-admin \
  --query 'Listeners[*].ListenerArn' --output text)
for L in $LISTENERS; do
  aws elbv2 delete-listener --listener-arn $L --region eu-west-2 --profile knowcode-admin
done

# Delete ALB
aws elbv2 delete-load-balancer --load-balancer-arn $ALB_ARN \
  --region eu-west-2 --profile knowcode-admin

# Delete target group
TG_ARN=$(aws elbv2 describe-target-groups --names sasha-cjkassociates-aws \
  --region eu-west-2 --profile knowcode-admin \
  --query 'TargetGroups[0].TargetGroupArn' --output text 2>/dev/null)
if [ "$TG_ARN" != "None" ] && [ -n "$TG_ARN" ]; then
  aws elbv2 delete-target-group --target-group-arn $TG_ARN \
    --region eu-west-2 --profile knowcode-admin
fi
  • Step 4: Delete EFS (mount targets, access points, file system)
# Delete mount targets
for MT in $(aws efs describe-mount-targets --file-system-id fs-08bae33d8c55688da \
  --region eu-west-2 --profile knowcode-admin \
  --query 'MountTargets[*].MountTargetId' --output text); do
  aws efs delete-mount-target --mount-target-id $MT \
    --region eu-west-2 --profile knowcode-admin
done

# Wait for mount targets to be deleted (takes ~1 min)
sleep 60

# Delete access points
for AP in fsap-05a7822d05bb79643 fsap-0f3dd7178f1c27c09; do
  aws efs delete-access-point --access-point-id $AP \
    --region eu-west-2 --profile knowcode-admin
done

# Delete file system
aws efs delete-file-system --file-system-id fs-08bae33d8c55688da \
  --region eu-west-2 --profile knowcode-admin
  • Step 5: Delete ECR repository
aws ecr delete-repository \
  --repository-name sasha-studio \
  --force \
  --region eu-west-2 \
  --profile knowcode-admin
  • Step 6: Delete ECS cluster
aws ecs delete-cluster \
  --cluster sasha-cluster \
  --region eu-west-2 \
  --profile knowcode-admin
  • Step 7: Delete ACM certificate
aws acm delete-certificate \
  --certificate-arn arn:aws:acm:eu-west-2:058865713619:certificate/42ea99a1-6ec5-4ccd-96ee-3d083d1d6067 \
  --region eu-west-2 \
  --profile knowcode-admin
  • Step 8: Clean up VPC resources
# Delete security groups (except default)
for SG in sg-0f7f18b62b151f888; do
  aws ec2 delete-security-group --group-id $SG \
    --region eu-west-2 --profile knowcode-admin 2>/dev/null
done

# Check for and delete any NAT gateways
NATS=$(aws ec2 describe-nat-gateways --filter "Name=vpc-id,Values=vpc-0d9e879efb2889b4d" \
  --region eu-west-2 --profile knowcode-admin \
  --query 'NatGateways[?State!=`deleted`].NatGatewayId' --output text)
for NAT in $NATS; do
  aws ec2 delete-nat-gateway --nat-gateway-id $NAT \
    --region eu-west-2 --profile knowcode-admin
done

# Delete subnets
for SUB in subnet-03f0cc95e3c817b88 subnet-00dabfb86a31e7505; do
  aws ec2 delete-subnet --subnet-id $SUB \
    --region eu-west-2 --profile knowcode-admin 2>/dev/null
done

# Detach and delete internet gateway
IGW=$(aws ec2 describe-internet-gateways \
  --filters "Name=attachment.vpc-id,Values=vpc-0d9e879efb2889b4d" \
  --region eu-west-2 --profile knowcode-admin \
  --query 'InternetGateways[0].InternetGatewayId' --output text)
if [ "$IGW" != "None" ] && [ -n "$IGW" ]; then
  aws ec2 detach-internet-gateway --internet-gateway-id $IGW \
    --vpc-id vpc-0d9e879efb2889b4d --region eu-west-2 --profile knowcode-admin
  aws ec2 delete-internet-gateway --internet-gateway-id $IGW \
    --region eu-west-2 --profile knowcode-admin
fi

# Delete VPC
aws ec2 delete-vpc --vpc-id vpc-0d9e879efb2889b4d \
  --region eu-west-2 --profile knowcode-admin
  • Step 9: Check for second VPC cleanup
# The brief mentions vpc-0827f20ea02268a08 - check if it exists and has resources
aws ec2 describe-vpcs --vpc-ids vpc-0827f20ea02268a08 \
  --region eu-west-2 --profile knowcode-admin 2>&1

If it exists and is unused, repeat subnet/IGW/VPC deletion.

  • Step 10: Clean up DataSync resources
# Delete the DataSync task and locations created in Task 5
aws datasync delete-task --task-arn <TASK_ARN> \
  --region eu-west-2 --profile knowcode-admin 2>/dev/null
aws datasync delete-location --location-arn <SOURCE_LOCATION_ARN> \
  --region eu-west-2 --profile knowcode-admin 2>/dev/null
aws datasync delete-location --location-arn <DEST_LOCATION_ARN> \
  --region eu-west-2 --profile knowcode-admin 2>/dev/null

Task 14: Update Skills (eu-west-2 → eu-west-1, sasha-cluster → knowcode)

Files:

  • Modify: .claude/skills/aws-container-upload/SKILL.md

  • Modify: .claude/skills/debug-fargate/SKILL.md

  • Modify: .claude/skills/fixing-teams-transcriber/SKILL.md

  • Step 1: Update aws-container-upload skill

In .claude/skills/aws-container-upload/SKILL.md:

| Change | From | To |
|---|---|---|
| Container Details table: Cluster | sasha-cluster | knowcode |
| Container Details table: Region | eu-west-2 | eu-west-1 |
| Step 1 command: --cluster | sasha-cluster | knowcode |
| Step 1 command: --region | eu-west-2 | eu-west-1 |
| Step 4 command: --cluster | sasha-cluster | knowcode |
| Step 4 command: --region | eu-west-2 | eu-west-1 |
| Running Commands: --cluster | sasha-cluster | knowcode |
| Running Commands: --region | eu-west-2 | eu-west-1 |
  • Step 2: Update debug-fargate skill

In .claude/skills/debug-fargate/SKILL.md:

| Change | From | To |
|---|---|---|
| Known Deployments table: Cluster | sasha-cluster | knowcode |
| Known Deployments table: Service | sasha-cjkassociates-aws | sasha-cjkassociates |
| Known Deployments table: Region | eu-west-2 | eu-west-1 |
| Example commands: --region | eu-west-2 | eu-west-1 |
| Example commands: --cluster | sasha-cluster | knowcode |
  • Step 3: Update fixing-teams-transcriber skill

In .claude/skills/fixing-teams-transcriber/SKILL.md, update the example debug command:

  • --region eu-west-2 → --region eu-west-1

  • --cluster sasha-cluster → --cluster knowcode

  • Step 4: Commit

git add .claude/skills/aws-container-upload/SKILL.md \
       .claude/skills/debug-fargate/SKILL.md \
       .claude/skills/fixing-teams-transcriber/SKILL.md
git commit -m "chore: update skills for CJK Sasha migration to eu-west-1 knowcode cluster"

Task 15: Update Documentation

Files:

  • Modify: docs-developer/features/aws-deploy/ecs-scheduled-shutdown.md

  • Modify: docs-developer/operations/rca-2026-02-16-cjkassociates-sqlite-corruption.md

  • Modify: docs-developer/operations/aws-cost-optimisation-log.md

  • Step 1: Update ecs-scheduled-shutdown.md

Replace all occurrences:

  • --region eu-west-2 → --region eu-west-1

  • sasha-cluster → knowcode

  • sasha-cjkassociates-aws → sasha-cjkassociates

  • service/sasha-cluster/sasha-cjkassociates-aws → service/knowcode/sasha-cjkassociates

  • SNS topic ARN: update region from eu-west-2 to eu-west-1 (note: SNS topic will need recreation if used)

  • Step 2: Update RCA document

In docs-developer/operations/rca-2026-02-16-cjkassociates-sqlite-corruption.md:

  • Add a note at the top: "Note: CJK Sasha was migrated from eu-west-2 to eu-west-1 (knowcode cluster) on 2026-04-23. References below reflect the original infrastructure at time of incident."

  • This is historical — don't change the body, just annotate.

  • Step 3: Add migration entry to cost optimisation log

In docs-developer/operations/aws-cost-optimisation-log.md, add a new entry:

### 2026-04-23 — CJK Sasha consolidated into KnowCode network (eu-west-1)

**Action:** Migrated CJK Sasha from isolated VPC/ALB in eu-west-2 to shared `knowcode` ECS cluster and `planb-alb` in eu-west-1.

**Resources decommissioned (eu-west-2):**
- ECS cluster `sasha-cluster` + service `sasha-cjkassociates-aws`
- ALB `sasha-alb` + target group
- EFS `fs-08bae33d8c55688da`
- ECR `sasha-studio`
- VPC `vpc-0d9e879efb2889b4d` (subnets, IGW, SGs)
- ACM certificate `*.app.context-is-everything.com`

**Estimated monthly saving:** ~$28 (eliminated duplicate ALB $17.56/mo + VPC $8.52/mo)

**New infrastructure (eu-west-1):**
- ECS service `sasha-cjkassociates` in `knowcode` cluster
- Target group `sasha-cjk-tg` on shared `planb-alb`
- EFS (new) with same access point structure
- ECR `sasha-studio` in eu-west-1
- ACM cert `*.app.context-is-everything.com` in eu-west-1
  • Step 4: Commit
git add docs-developer/features/aws-deploy/ecs-scheduled-shutdown.md \
       docs-developer/operations/rca-2026-02-16-cjkassociates-sqlite-corruption.md \
       docs-developer/operations/aws-cost-optimisation-log.md
git commit -m "docs: update AWS references for CJK Sasha eu-west-2 → eu-west-1 migration"

Task 16: Update CLAUDE.md

Files:

  • Modify: CLAUDE.md

  • Step 1: Add CJK deployment section

After the existing "AWS ECS Fargate Deployment (sasha1)" section in CLAUDE.md, add a new section:

### AWS ECS Fargate Deployment (CJK Associates)

**CJK Associates Sasha** (`cjkassociates.app.context-is-everything.com`) runs on AWS ECS Fargate in the KnowCode network.

**Infrastructure:**
- **AWS Account**: `058865713619`
- **Region**: `eu-west-1` (Ireland)
- **Cluster**: `knowcode`
- **Service**: `sasha-cjkassociates`
- **ECR**: `058865713619.dkr.ecr.eu-west-1.amazonaws.com/sasha-studio`
- **Storage**: EFS mounts for `/home/sasha` and `/app/data` (persistent across deploys)
- **Load Balancer**: Shared `planb-alb` with host-header routing
- **Schedule**: Auto-scales 08:00-18:30 Mon-Fri (Europe/London)

**Deploy flow** (after GHCR image is built):
```bash
# 1) Copy image from GHCR to ECR
crane copy \
  ghcr.io/context-is-everything/sasha-ai-knowledge-management:latest \
  058865713619.dkr.ecr.eu-west-1.amazonaws.com/sasha-studio:latest \
  --platform linux/amd64

# 2) Force new deployment
aws ecs update-service \
  --cluster knowcode \
  --service sasha-cjkassociates \
  --force-new-deployment \
  --region eu-west-1
```

**Debugging**: Use `aws ecs execute-command` for shell access; CloudWatch Logs are at `/ecs/sasha`.

  • Step 2: Commit

```bash
git add CLAUDE.md
git commit -m "docs: add CJK Associates AWS deployment section to CLAUDE.md"
```

Task 17: Update GitHub Actions for eu-west-1 ECR (if applicable)

  • Step 1: Check if any workflow pushes to eu-west-2 ECR

Review .github/workflows/ for any references to eu-west-2 ECR. Based on research, the GitHub Actions only push to GHCR (not ECR directly), so no changes are expected. Verify and skip if confirmed.
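A recursive grep makes the verification quick and repeatable; scan_region_refs is a hypothetical helper name, not part of the repo:

```shell
# Hypothetical helper: list every eu-west-2 reference under a directory so
# the "no ECR pushes from CI" assumption can be verified quickly.
scan_region_refs() {
  grep -rn "eu-west-2" "$1" 2>/dev/null || true
}

scan_region_refs .github/workflows
```

No output means no workflow changes are needed and the task can be skipped.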

  • Step 2: Commit if changes were needed

Task 18: Clean Up IAM Users

  • Step 1: Check IAM users from the brief
for USER in sasha-deploy sasha-cloudwatch sasha-cost-readonly; do
  echo "=== $USER ==="
  aws iam list-attached-user-policies --user-name $USER --profile knowcode-admin 2>&1
  aws iam list-access-keys --user-name $USER --profile knowcode-admin 2>&1
done
  • Step 2: Assess and clean up

If these users only had permissions for eu-west-2 resources and their functions are now covered by knowcode-admin or planb-ops, delete them:

# For each user: remove policies, delete access keys, delete user
# Only proceed if confirmed unnecessary
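A dry-run sketch of that teardown, printing the commands for review rather than executing them (iam_cleanup_plan is a hypothetical helper; the `<POLICY_ARN>`, `<POLICY_NAME>`, and `<KEY_ID>` placeholders come from the list commands' output):

```shell
# Hypothetical dry-run helper: prints (does not run) the IAM teardown
# commands for one user so they can be reviewed before execution.
iam_cleanup_plan() {
  user="$1"
  # Attached managed policies must be detached, inline policies deleted,
  # and access keys removed before delete-user will succeed.
  echo "aws iam list-attached-user-policies --user-name $user --profile knowcode-admin"
  echo "aws iam detach-user-policy --user-name $user --policy-arn <POLICY_ARN> --profile knowcode-admin"
  echo "aws iam list-user-policies --user-name $user --profile knowcode-admin"
  echo "aws iam delete-user-policy --user-name $user --policy-name <POLICY_NAME> --profile knowcode-admin"
  echo "aws iam list-access-keys --user-name $user --profile knowcode-admin"
  echo "aws iam delete-access-key --user-name $user --access-key-id <KEY_ID> --profile knowcode-admin"
  echo "aws iam delete-user --user-name $user --profile knowcode-admin"
}

for USER in sasha-deploy sasha-cloudwatch sasha-cost-readonly; do
  echo "=== plan for $USER ==="
  iam_cleanup_plan "$USER"
done
```

Run the echoed commands by hand only after confirming the user is unnecessary.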
  • Step 3: Check App Runner

The brief mentions $0.66/week App Runner costs in eu-west-2. Check and clean up:

aws apprunner list-services --region eu-west-2 --profile knowcode-admin --output json

Delete if found and no longer needed.