Considering the Use of Flynn
Flynn is an open platform that allows many of the operational concerns regarding web application deployment to be abstracted away behind an API. It centralizes the devops work into (a) keeping the Flynn platform running, and (b) providing suitable support to the development groups using it. For a developer, deployment can be as simple as tagging a Git repository and running a single short bash script - though of course some thought has to go towards tailoring an application to work in the context of Flynn.
Flynn and its Ilk are a Reaction to Devops Scarcity
In a world in which devops experience is comparatively rare and hard to hire, it makes sense to try to lighten the burden of devops work that accompanies web application development. Ideally, every team of five developers would include one member with significant devops experience, capable of building and maintaining deployment pipelines, cloud infrastructure, and all of the automation that goes along with it. That would enable the team to be fairly autonomous in their work within the broader institutional architecture. In practice a company must be lucky, persistent, and generous with compensation in order to reach even one devops specialist for every twenty developers. There is too little devops experience to go around: this drives centralization rather than distribution of operations, and sets up devops as a gating factor for speed of development and deployment in most organizations.
Platforms like Flynn allow some of these organizational issues to be addressed, and can be used to provide more autonomy and speed of development to application development groups without the need for deep devops experience. After some time spent working with Flynn, however, it is clear that it isn't a magic wand: reductions in overall workload and reduced overall development time and cost of ownership for applications deployed to Flynn are very plausible goals, but some fraction of the devops work that vanishes from development teams is only moved to the group that maintains Flynn. It doesn't evaporate.
Expect a Lot of Detail-Level Work in Setting Up Flynn From Scratch
Setting up a robust Flynn infrastructure isn't a trivial undertaking. Beyond the sparse basics, expect to spend a lot of time on edge cases, mysterious failures deep inside Flynn that require help from the core developers to fix, and tailoring the setup to the particular organizational use case. The peg never quite fits the hole, and there will always be rough corners.
It is a very good idea to spend time on carefully pulling together requirements from the various teams that will be using Flynn before starting work on a first version of the platform, more so than is usually the case in these matters. If following the path of discovery by incremental development, expect to encounter numerous dead ends and late consequences of early decisions as the iterations progress.
Become Comfortable with Containerization
Flynn is built on containers in the Docker sense, though it uses a custom container implementation to achieve a similar outcome. Most of the fiddly details and issues encountered in the use of Flynn will be found in the container layer, so a familiarity with the idioms of container-based development is necessary.
Internal Applications are the Easiest Use Case
The cost of devops for small internal applications and APIs is a particular pain point; even mid-sized companies might have dozens or more of these in the stable. The amount of work needed to manage deployment and operations for such applications is sizable in comparison to the work required to produce them, which definitely discourages exploration and automation, even putting to one side the lack of devops engineers to carry out that work.
Conversely, these applications tend to have low levels of traffic and generate little load on a server when running. That makes them the best test bed for a new Flynn platform deployment: rolling out Flynn to support small internal applications is far, far less complicated and onerous than trying to build out a production-ready Flynn platform for high traffic customer-facing applications.
So start here. It should be fairly straightforward to demonstrate the value of Flynn to the organization by (a) the degree to which developers adopt it for their utility applications, and (b) the degree to which devops workload reduction in the development teams encourages the creation of more such applications.
Stateless Applications are also Much Easier
It is also a good idea to start with simple, stateless applications rather than diving right into management of persistent data and containerized databases with Flynn. If persistence must be on the agenda, then consider the use of data layer resources outside the Flynn cluster, at least initially. Database backup and operations concerns are always much more onerous than those accompanying stateless applications, and that is just as true of Flynn as elsewhere. Further, Flynn hasn't yet progressed to the point at which the management of large containerized databases is viable for production systems.
Training, Documentation, and Examples are Needed
Applications deployed to Flynn have their own best practices and quirks. Should an application be decomposed into multiple processes, which must then find a way to communicate, or deployed as a single-process monolith? All configuration has to be tied to a few environment variables, and those variables are provided late, significantly after startup. Log management and monitoring require different approaches. Deploying and managing a database in Flynn is its own topic. And so forth.
Deployment, while just a couple of commands issued in the application Git repository, is nonetheless best wrapped up in a standardized shell script of some sort. There will be organization-specific details to a Flynn cluster and how to use it that are not covered in the available online documentation, and those are best abstracted away behind such a script. The documentation must then cover and explain those differences.
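As a sketch of that idea, a wrapper might look something like the following. The cluster naming, the argument conventions, and the FLYNN/GIT overrides are assumptions for illustration, not anything mandated by Flynn; the two commands it wraps are the checkout-and-create pair shown elsewhere in this piece.

```shell
#!/bin/sh
# Hypothetical deployment wrapper. It bakes organization-specific
# conventions (cluster selection, tag checkout) into one place so that
# developers don't need to remember them. The FLYNN and GIT overrides
# exist so the script can be exercised without a real cluster.
deploy() {
  flynn_bin="${FLYNN:-flynn}"
  git_bin="${GIT:-git}"
  cluster="$1"
  app="$2"
  tag="$3"
  if [ -z "$cluster" ] || [ -z "$app" ] || [ -z "$tag" ]; then
    echo "usage: deploy <cluster> <app> <tag>" >&2
    return 1
  fi
  # What is tagged is what is deployed, nothing more.
  "$git_bin" checkout "$tag" || return 1
  "$flynn_bin" -c "$cluster" create "$app"
}
```

A developer would then run something like `deploy qa-cluster example-application v0.1.0` from the repository root, without needing to know the cluster details.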
None of this is hard to figure out, certainly easier than becoming familiar with the ins and outs of CloudFormation, but adoption in an organization will definitely require some training, documentation beyond the basics found online, more tailored to the particular use cases anticipated, and examples to demonstrate the best practices. All of this takes time, but it is time well spent.
Flynn Applications and Configuration
An application deployed to Flynn by necessity has some differences from an application deployed in a more traditional fashion. This is most apparent when it comes to configuration. Flynn deployment is Git based:
    cd path/to/example-application
    git checkout v0.1.0
    flynn \
      -c qa-cluster \
      create example-application
What is checked in is what is deployed and launched, nothing more. So how to tell the application what environment it is in, and where it should look for more detailed configuration? This is accomplished by setting environment variables after deployment.
    flynn \
      -c qa-cluster \
      -a example-application \
      env set \
      ENVIRONMENT=qa \
      SECRET=value \
      ...
One important side-effect of this way of doing things is that an application must be able to run without its configuration, at least serving an error page without immediately crashing. The application process will be restarted when the environment is set, but until then it is on its own. If the application crashes immediately without its configuration, then the container will just keep restarting it until the environment is set, which can lead to an undesirable degree of thrashing.
Multiple Processes or Single Process?
Another area in which the design of applications differs for Flynn lies in the ability to split an application out into multiple processes. A Procfile can contain more than just a web application definition:
    # Web application.
    web: node web/index.js

    # Gather up data from various sources.
    collector: node collector/index.js

    # An application to run janitor tasks once a day.
    janitor: node janitor/index.js
Any self-contained chunk of functionality in an application might be broken out into its own process in order to run at a different scale from the other parts of the application. The ability to scale concurrency for individual application processes via the Flynn API can be a useful shortcut in some circumstances, cutting down on the amount of development time required to get the application up and running. For the example above:
    # Launch five collector processes to gather data for the single webapp process.
    flynn \
      -c qa-cluster \
      -a example-application \
      scale \
      web=1 \
      collector=5 \
      janitor=1
One important caveat here is that the various processes are completely isolated from one another, running in separate containers. They can only communicate via some external intermediary, such as a database, shared network file system, or the like. In a more standard application, the file system or sockets could be used for this, but that isn't the case in Flynn.
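To illustrate the shape this takes, the sketch below routes all communication between the collector and web processes through a store interface. The interface is hypothetical: in a real deployment it would be backed by a database or queue reachable from every container, and the in-memory version exists only to keep the example self-contained.

```javascript
// Collector and web processes can't share memory, files, or sockets in
// Flynn, so both talk to an external store. This in-memory version is a
// stand-in for illustration only; in production the same push/all
// contract would be implemented against a shared database or queue.
function memoryStore() {
  const items = [];
  return {
    // Called by collector processes to hand off gathered data.
    push(item) { items.push(item); },
    // Called by the web process to read what has been gathered.
    all() { return items.slice(); },
  };
}

// What a collector process does: gather data and hand it to the store.
function collect(store, source) {
  store.push({ source, collectedAt: Date.now() });
}

// What the web process does: read from the store to build a response.
function report(store) {
  return store.all().map((item) => item.source);
}
```

The important point is the division of labor: neither process ever assumes the other is reachable directly, only that the intermediary is.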
Ongoing Support Requires Ongoing Effort
Exactly the sort of fiddly, esoteric container issues and other problems deep inside Flynn that occurred during initial setup will continue to happen as people use the system. Application developers will find ways to mess up their deployments, or hang the whole platform. The initial documentation and training will be found to have important holes and missing topics.
The effort put in to maintain a Flynn platform will pay off, as it should be considerably less work - and especially less devops work - than would otherwise be the case for the applications now deployed to Flynn rather than in other ways. But less work is not an absence of work. Expect at least one devops engineer to spend much of their time supporting Flynn as the deployed application count rises, and as more critical applications are deployed to the platform. This is the case even if the development of Flynn platform capabilities halts at a satisfactory stopping point for the organization.