Comparing a pay-per-use data storage service like Amazon's S3 against a service with a fixed reserved-storage plan is a bit of an apples-to-oranges comparison. If you have, say, 100 gigabytes available through a SugarSync account, your unused capacity will always be greater than zero. You will always pay for some amount of unused capacity, making your actual gigabyte-month costs higher than they would be if you were charged only for actual usage. That said, here are some current costs by gigabyte-month:
| Service | Option | Advertised rate | Effective rate at 50 GB | Effective rate at 100 GB |
|---|---|---|---|---|
| SugarSync | $7.49/month for 100 GB | $.0749 | $.1498 | $.0749 |
| AWS S3 | less than 1 TB | $.03 | $.03 | $.03 |
| AWS Glacier | less than 1 TB | $.007 | $.007 | $.007 |
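The "effective rate" columns above are just the flat monthly price divided by the gigabytes you actually store. A minimal sketch of that arithmetic, using the SugarSync plan from the table:

```python
def effective_rate(monthly_price, gb_stored):
    """Effective $/GB-month when paying a flat monthly price."""
    return monthly_price / gb_stored

# SugarSync's $7.49/month, 100 GB plan:
print(round(effective_rate(7.49, 50), 4))   # storing only 50 GB -> 0.1498
print(round(effective_rate(7.49, 100), 4))  # using the full 100 GB -> 0.0749
```

A pay-per-use service like S3 is effectively `effective_rate` with `gb_stored` always equal to what you pay for, which is why its advertised and effective rates match.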
This post got me thinking about which version of Linux to run on my AWS EC2 instances. At first, I found AWS Linux perfectly suitable: it's endorsed by Amazon, and it obviously runs well. All of the appropriate interactions between the operating system and the virtualization layer clearly work, whereas it isn't immediately obvious that another distribution would do the same.
My investigation was inspired by the realization that AWS Linux can't (reasonably) be taken out of AWS to investigate issues, reproduce problems, or develop offline. Similarly, you can't build an instance on a local (screaming fast) machine and then push the image back to EC2.
I chose CentOS to stay that much more agnostic about the underlying virtualization service, even though I'm an AWS fan.
Here are some pleasant side effects of using CentOS that I found, as bonuses:
- SELinux: more security if you want it (and even if you don't)
- Clearer documentation, with examples from the wider community
- CentOS is free, just like AWS Linux
- Fresher package repositories
- The ability to simulate my environment locally
- No dependence on an image that can't leave AWS
- The option to build your own kernel
Since I've switched all my content-only web sites to WordPress, I'm necessarily obsessed with performance, and with finding the most cost-effective way to run a site flexibly while still getting screamingly fast page loads. I say "flexibly" because I've gotten used to installing a variety of software directly from the command line. This may be unnecessary as I trust the available set of WordPress modules more and more, but I still like the flexibility of managing my own Linux server.
So what’s fast enough?
An Amazon t2.micro instance provides a 1.016s average total round trip on repeated viewings.
Dreamhost shared hosting averages 2.530s.
But this wonderful post has me thinking that averages may not be particularly helpful, and that the "max response time" values imply real user experience isn't something that can be terribly well controlled.
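The numbers above can be collected with a sketch like the following (the URL is a placeholder, and reporting the max alongside the mean is exactly the point the post above makes: the mean hides the long tail):

```python
import time
from urllib.request import urlopen

def sample_round_trips(url, n=10):
    """Fetch a page n times; return each fetch's wall-clock time in seconds."""
    times = []
    for _ in range(n):
        start = time.perf_counter()
        urlopen(url).read()
        times.append(time.perf_counter() - start)
    return times

def summarize(times):
    """The mean hides the tail, so report the max alongside it."""
    return {"mean": sum(times) / len(times), "max": max(times)}

# Example against a hypothetical site:
# print(summarize(sample_round_trips("http://example.com/")))
```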
An ideal blog post provides answers for others who may be searching. This one doesn't: it's an accounting of the mysteries of "localhost" on CentOS Linux instances. I'll post updates as I actually start to understand the answers.
Fetching a web page from localhost is slower than fetching from 127.0.0.1
127.0.0.1: average response .015s
localhost: average response .166s
So an entire end-to-end fetch is roughly ten times as slow using localhost, and the name lookup apparently isn't cached, since it doesn't get any faster with repeated runs.
Yes, "localhost" is an alias for 127.0.0.1 in my hosts file, so an actual DNS request on the wire shouldn't be necessary.
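One hypothesis worth checking (I haven't confirmed it's the cause here): "localhost" may also resolve to the IPv6 address ::1, and a slow or failed IPv6 attempt before the IPv4 fallback could explain the gap. A quick way to see what the resolver actually returns for each name:

```python
import socket

def addresses_for(host):
    """Return the distinct addresses, in resolver order, for a hostname."""
    infos = socket.getaddrinfo(host, 80, proto=socket.IPPROTO_TCP)
    seen = []
    for family, _, _, _, sockaddr in infos:
        addr = sockaddr[0]
        if addr not in seen:
            seen.append(addr)
    return seen

print(addresses_for("127.0.0.1"))  # always just ['127.0.0.1']
print(addresses_for("localhost"))  # may list '::1' first, depending on /etc/hosts
```

If '::1' comes back first for "localhost", clients that try addresses in order would attempt IPv6 before falling back to 127.0.0.1.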
WordPress won’t find my database on host “localhost”, but it will find it at 127.0.0.1
One of the limitations of IAM roles on Amazon EC2 is that the instance role MUST be assigned at "launch" time. This is distinct from the "Start" action on an instance that was running and has been stopped. In order to associate a current instance with a new or existing IAM profile, you must create a new instance. This can be done by creating an image from the current instance and launching that.
Once an IAM role has been assigned, you may alter it. This suggests that the best practice is to always launch an instance with SOME IAM role.
Are the command line tools aware of IAM roles? YES: although it isn't clearly documented, the AWS command line tools will find the role of the instance they're running on, even if the instance isn't running AWS Linux.
Assuming a role
Select the item, then press and hold the Command key. The path appears at the bottom of the preview pane.
Why knowing where the file is isn't considered a top-tier feature, I don't know.
I appreciated this article on which books are worth reading. The premise is that business books fall into three groups: "business card books" that exist mainly to tout the author's credibility, books that don't have enough material to be worth a book, and actually good books; nobody should bother reading the first two.
I don't agree that most business books fall neatly into one of these categories, but I do appreciate the list of "good books":
“MASTERY” by Robert Greene
"BOLD" by Peter Diamandis and Steven Kotler
“OUTLIERS” by Malcolm Gladwell
“WHERE GOOD IDEAS COME FROM” by Steven Johnson
"MAN'S SEARCH FOR MEANING" by Viktor Frankl
“BORN STANDING UP” by Steve Martin
“ZERO TO ONE” by Peter Thiel
“QUIET” by Susan Cain
“ANTIFRAGILE” by Nassim Taleb
“MINDSET” by Carol Dweck
I'm starting with Nassim Taleb's book: he seems to have dozens of quotable references in every book he's written.
I've had numerous issues convincing my Mac Pro 1,1 to sleep, and almost as many problems keeping it from sleeping when it's in the middle of doing work I actually want done.
Apple's site is quite helpful here; I tried everything on the list. I suspect that Bluetooth and/or network wake is the issue.