We all have those moments when queue jobs fail. Sometimes it’s a bad deploy; other times it’s an upstream service that’s taken a poop. Sometimes we need to retry failed jobs, but can’t just run `php artisan queue:retry all`, because maybe we haven’t cleaned up our failed jobs lately.
S3 is a fantastic storage service. We use it all over the place, but sometimes it can be hard to find what you’re looking for in buckets with massive data sets. Consider the following questions:
What happens when you know the file name, but perhaps not the full prefix (path) of the file?
How do you find files modified on specific dates, regardless of prefix?
I hit the first question in production today, which is the motivation for this post. The second is an example I came up with while extrapolating what I’d learned to other use cases.
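Both questions come down to the same constraint: S3 can only filter server-side by key prefix, so anything else means listing objects and filtering client-side. Here’s a minimal sketch of that filtering logic in pure Python, over entries shaped like the `Contents` items a `list_objects_v2` call returns (the bucket layout and file names are made up for illustration):

```python
from datetime import date, datetime

# Each entry mimics one item from a list_objects_v2 "Contents" page,
# e.g. as collected via boto3's "list_objects_v2" paginator.
objects = [
    {"Key": "exports/2017/03/01/report.csv", "LastModified": datetime(2017, 3, 1, 4, 12)},
    {"Key": "exports/2017/03/02/report.csv", "LastModified": datetime(2017, 3, 2, 4, 9)},
    {"Key": "exports/2017/03/02/errors.log", "LastModified": datetime(2017, 3, 2, 4, 10)},
]

def find_by_name(objs, filename):
    """Match on the trailing path component, ignoring the prefix."""
    return [o["Key"] for o in objs if o["Key"].rsplit("/", 1)[-1] == filename]

def find_by_date(objs, day):
    """Match on the modification date, regardless of prefix."""
    return [o["Key"] for o in objs if o["LastModified"].date() == day]

print(find_by_name(objects, "report.csv"))
print(find_by_date(objects, date(2017, 3, 2)))
```

With boto3 you’d feed the same functions each page from `client.get_paginator("list_objects_v2").paginate(Bucket=...)` rather than a hard-coded list; on large buckets the listing itself dominates, which is exactly why this is painful at scale.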
We run Elasticsearch in production, fronted by an API which abstracts away complex queries and presents a consistent interface to the APIs that consume the data. It came to my attention recently that we had no visibility in NewRelic on external transaction time going to ES.
In a nutshell, the problem turned out to be that the elasticsearch-php-sdk uses RingPHP as a transport, which NewRelic doesn’t support.
NewRelic Before and After Instrumentation
We’ve seen occasionally poor performance on the AWS EC2 Metadata API when using IAM roles at Intouch, which got me thinking: why does the aws-sdk-php need to hit the EC2 Metadata API during every request? Well, it turns out, it’s simple. If you don’t explicitly give the SDK a cache interface, it won’t use one!
We use Laravel for all of our APIs at Intouch Insight, so when AWS Batch was released, I started wondering about backing our Laravel Queues with AWS Batch. This seemed like the perfect opportunity to give back, since I’m sure others are looking at Batch with the same interest that I am. A few evenings of playing around, and here we are.
Intermittent issues are every developer’s best friend. Recently we started hitting an error during the `npm install` phase of our CI and CD Jenkins jobs. Here’s the error:
In our particular use case, we’d just passed the threshold of having more than 10 internally sourced NPM dependencies, each pulled by tag directly from our GitLab server.
The solution is updating quite a simple SSH setting: `MaxStartups`. Here’s the man page entry from sshd_config.
Specifies the maximum number of concurrent unauthenticated connections to the SSH daemon. Additional connections will be dropped until authentication succeeds or the LoginGraceTime expires for a connection. The default is 10.
Yes - sshd will throttle your concurrent connections while they authenticate. Increasing `MaxStartups` caused our npm installation woes to disappear from our CI environment. Huzzah!
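For reference, the change is a single line in `/etc/ssh/sshd_config` followed by a reload of sshd. The value 30 below is illustrative; size it to how many concurrent git-over-SSH clones your builds actually open:

```
# /etc/ssh/sshd_config
# Allow more concurrent unauthenticated connections.
# The default is 10 (newer OpenSSH releases default to the
# rate-limiting form "10:30:100").
MaxStartups 30
```

After editing, reload the daemon (e.g. `sudo service ssh reload` on Ubuntu) so the new limit takes effect without dropping existing sessions.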
I’ve been using the ELK stack for over three years. It’s a tool that is used daily at work, so it’s little surprise that when In-Touch Insight Systems went down the AWS Lambda road for one of our newest projects, I wasn’t happy using the default CloudWatch Logs UI.
CloudWatch Logs with Lambda Output
Initial setup of OpsWorks instances takes between 15 and 25 minutes depending on the complexity of your chef recipes.
The OpsWorks startup process injects a sequence of updates and package installations via the instance userdata before setup can run. To make matters worse, the default Ubuntu 14.04 AMI provided by AWS (at the time of writing) is over six months old! YMMV, but I saw an 11-minute speedup in “time to running_setup” simply by introducing a simple custom AMI.