Mitch Garnaat

Merge pull request #35 from garnaat/python-refactor

A WIP commit on the new refactor for support of Python and other features
Showing 75 changed files with 2991 additions and 643 deletions
......@@ -3,6 +3,7 @@ python:
- "2.7"
- "3.3"
- "3.4"
- "3.5"
install:
- pip install -r requirements.txt
- pip install coverage python-coveralls
......
......@@ -2,4 +2,4 @@ include README.md
include LICENSE
include requirements.txt
include kappa/_version
recursive-include samples *.js *.py *.yml *.cf *.json *.txt
......
......@@ -22,12 +22,12 @@ in a Push model (e.g. S3, SNS) rather than a Pull model.
* Add an event source to the function
* View the output of the live function
Kappa tries to help you with some of this. It creates all IAM policies for you
based on the resources you have told it you need to access. It creates the IAM
execution role for you and associates the policy with it. Kappa will zip up
the function and any dependencies and upload them to AWS Lambda. It also sends
test data to the uploaded function and finds the related CloudWatch log stream
and displays the log events. Finally, it will add the event source to turn
your function on.
If you need to make changes, kappa will allow you to easily update your Lambda
......@@ -39,58 +39,201 @@ Installation
The quickest way to get kappa is to install the latest stable version via pip:
pip install kappa
Or for the development version:
pip install git+https://github.com/garnaat/kappa.git
Getting Started
---------------
Kappa is a command line tool. The basic command format is:
kappa <path to config file> <command> [optional command args]
Where ``command`` is one of:
* create - creates the IAM policy (if necessary), the IAM role, and zips and
uploads the Lambda function code to the Lambda service
* invoke - make a synchronous call to your Lambda function, passing test data
and display the resulting log data
* invoke_async - make an asynchronous call to your Lambda function, passing test
data.
* dryrun - make the call but only check things like permissions and report
back. Don't actually run the code.
* tail - display the most recent log events for the function (remember that it
can take several minutes before log events are available from CloudWatch)
* add_event_sources - hook up an event source to your Lambda function
* delete - delete the Lambda function, remove any event sources, delete the IAM
policy and role
* update_code - Upload new code for your Lambda function
* update_event_sources - Update the event sources based on the information in
your kappa config file
* status - display summary information about functions, stacks, and event
sources related to your project.
The ``config file`` is a YAML format file containing all of the information
about your Lambda function.
If you use environment variables for your AWS credentials (as normally supported by boto),
simply exclude the ``profile`` element from the YAML file.
An example project based on a Kinesis stream can be found in
[samples/kinesis](https://github.com/garnaat/kappa/tree/develop/samples/kinesis).
The basic workflow is:
* Create your Lambda function
* Create any custom IAM policy you need to execute your Lambda function
* Create some sample data
* Create the YAML config file with all of the information
* Run ``kappa <path-to-config> create`` to create roles and upload function
* Run ``kappa <path-to-config> invoke`` to invoke the function with test data
* Run ``kappa <path-to-config> update_code`` to upload new code for your Lambda
function
* Run ``kappa <path-to-config> add_event_sources`` to hook your function up to the event source
* Run ``kappa <path-to-config> tail`` to see more output
Quick Start
-----------
To get a feel for how kappa works, let's take a look at a very simple example
contained in the ``samples/simple`` directory of the kappa distribution. This
example is so simple, in fact, that it doesn't really do anything. It's just a
small Lambda function (written in Python) that accepts some JSON input, logs
that input to CloudWatch logs, and returns a JSON document back.
The structure of the directory is:
```
simple/
├── _src
│   ├── README.md
│   ├── requirements.txt
│   ├── setup.cfg
│   └── simple.py
├── _tests
│   └── test_one.json
└── kappa.yml.sample
```
Within the directory we see:
* `kappa.yml.sample` which is a sample YAML configuration file for the project
* `_src` which is a directory containing the source code for the Lambda function
* `_tests` which is a directory containing some test data
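The handler itself is tiny. As a rough sketch (hypothetical — see the actual `_src/simple.py` for the real code), a function that logs its input to CloudWatch and returns a hard-coded JSON document might look like:

```python
import logging

logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)


def handler(event, context):
    # Log the incoming event; in Lambda this shows up in the
    # function's CloudWatch log stream.
    logger.debug(event)
    # Return a hard-coded, JSON-serializable response.
    return {'status': 'success'}
```

The ``handler: simple.handler`` setting in the config file is what tells Lambda to call this function.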
The first step is to make a copy of the sample configuration file:
$ cd simple
$ cp kappa.yml.sample kappa.yml
Now you will need to edit ``kappa.yml`` slightly for your use. The file looks
like this:
```
---
name: kappa-simple
environments:
  dev:
    profile: <your profile here>
    region: <your region here>
    policy:
      resources:
        - arn: arn:aws:logs:*:*:*
          actions:
            - "*"
  prod:
    profile: <your profile here>
    region: <your region here>
    policy:
      resources:
        - arn: arn:aws:logs:*:*:*
          actions:
            - "*"
lambda:
  description: A very simple Kappa example
  handler: simple.handler
  runtime: python2.7
  memory_size: 128
  timeout: 3
```
The ``name`` at the top is just a name used for this Lambda function and other
things we create that are related to this Lambda function (e.g. roles,
policies, etc.).
The ``environments`` section is where we define the different environments into
which we wish to deploy this Lambda function. Each environment is identified
by a ``profile`` (as used in the AWS CLI and other AWS tools) and a
``region``. You can define as many environments as you wish but each
invocation of ``kappa`` will deal with a single environment. Each environment
section also includes a ``policy`` section. This is where we tell kappa about
AWS resources that our Lambda function needs access to and what kind of access
it requires. For example, your Lambda function may need to read from an SNS
topic or write to a DynamoDB table, and this is where you would provide the ARNs
([Amazon Resource Names](http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html))
that identify those resources. Since this is a very simple example, the only
resource listed here is for CloudWatch logs so that our Lambda function is able
to write to the CloudWatch log group that will be created for it automatically
by AWS Lambda.
The ``lambda`` section contains the configuration information about our Lambda
function. These values are passed to Lambda when we create the function and
can be updated at any time after.
To modify this for your own use, you just need to put in the right values for
``profile`` and ``region`` in one of the environment sections. You can also
change the names of the environments to be whatever you like but the name
``dev`` is the default value used by kappa, so keeping it saves some typing.
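To make the environment mechanics concrete, here is a minimal sketch of how a parsed config could be used to select one environment section (the config dict below is hypothetical — it stands in for the result of parsing ``kappa.yml``, e.g. with PyYAML — and kappa's internals may differ):

```python
# Stand-in for the parsed kappa.yml; profiles and regions are made up.
config = {
    'name': 'kappa-simple',
    'environments': {
        'dev': {'profile': 'my-dev-profile', 'region': 'us-west-2'},
        'prod': {'profile': 'my-prod-profile', 'region': 'us-east-1'},
    },
}


def get_environment(config, env='dev'):
    # 'dev' is the default environment name used by kappa.
    try:
        return config['environments'][env]
    except KeyError:
        raise ValueError('no such environment: {}'.format(env))


print(get_environment(config)['region'])          # the dev region
print(get_environment(config, 'prod')['profile'])  # the prod profile
```

Each invocation of kappa works against exactly one such environment section.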
Once you have made the necessary modifications, you should be ready to deploy
your Lambda function to the AWS Lambda service. To do so, just do this:
```
$ kappa deploy
```
This assumes you want to deploy the default environment called ``dev`` and that
you have named your config file ``kappa.yml``. If, instead, you called your
environment ``test`` and named your config file ``foo.yml``, you would do this:
```
$ kappa --env test --config foo.yml deploy
```
In either case, you should see output that looks something like this:
```
$ kappa deploy
deploying
...deploying policy kappa-simple-dev
...creating function kappa-simple-dev
done
$
```
So what has kappa done? It has created a new managed policy called
``kappa-simple-dev`` that grants access to the CloudWatch Logs service. It has
also created an IAM role called ``kappa-simple-dev`` that uses that policy.
And finally it has zipped up our Python code and created a function in AWS
Lambda called ``kappa-simple-dev``.
To test this out, try this:
```
$ kappa invoke _tests/test_one.json
invoking
START RequestId: 0f2f9ecf-9df7-11e5-ae87-858fbfb8e85f Version: $LATEST
[DEBUG] 2015-12-08T22:00:15.363Z 0f2f9ecf-9df7-11e5-ae87-858fbfb8e85f {u'foo': u'bar', u'fie': u'baz'}
END RequestId: 0f2f9ecf-9df7-11e5-ae87-858fbfb8e85f
REPORT RequestId: 0f2f9ecf-9df7-11e5-ae87-858fbfb8e85f Duration: 0.40 ms Billed Duration: 100 ms Memory Size: 256 MB Max Memory Used: 23 MB
Response:
{"status": "success"}
done
$
```
We have just called our Lambda function, passing in the contents of the file
``_tests/test_one.json`` as input to our function. We can see the output of
the CloudWatch logs for the call and we can see the logging call in the Python
function that prints out the ``event`` (the data) passed to the function. And
finally, we can see the Response from the function which, for now, is just a
hard-coded data structure returned by the function.
Need to make a change in your function, your list of resources, or your
function configuration? Just go ahead and make the change and then re-run the
``deploy`` command:
$ kappa deploy
Kappa will figure out what has changed and make the necessary updates for you.
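One common way to implement that kind of change detection is to compare a checksum of the deployment package; Lambda reports a base64-encoded SHA-256 of the deployed zip as ``CodeSha256``. A minimal sketch of the idea (illustrative only — not necessarily kappa's actual approach):

```python
import base64
import hashlib


def code_sha256(zip_bytes):
    # Compute the same digest Lambda exposes as ``CodeSha256``:
    # base64-encoded SHA-256 of the deployment package bytes.
    return base64.b64encode(
        hashlib.sha256(zip_bytes).digest()).decode('utf-8')


local_digest = code_sha256(b'example zip contents')
remote_digest = code_sha256(b'example zip contents')
if local_digest == remote_digest:
    # Nothing changed locally; a deploy step could skip the upload.
    pass
```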
That gives you a quick overview of kappa. To learn more about it, I recommend
you check out the tutorial.
Policies
--------
Hands up who loves writing IAM policies. Yeah, that's what I thought. With
Kappa, there is a simplified way of writing policies and granting your Lambda
function the permissions it needs.
The simplified version allows you to specify, in your `kappa.yml` file, the
ARN of the resource you want to access, and then a list of the API methods you
want to allow. For example:
```
policy:
  resources:
    - arn: arn:aws:logs:*:*:*
      actions:
        - "*"
```
To express this using the official IAM policy format, you can instead use a
statement:
```
policy:
  statements:
    - Effect: Allow
      Resource: "*"
      Action:
        - "logs:*"
```
Both of these do the same thing.
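To make the relationship between the two forms concrete, here is a hedged sketch of how the simplified ``resources`` form could be expanded into an official IAM policy document (illustrative only — kappa's actual policy generation may differ):

```python
import json


def resources_to_policy(resources):
    # Expand the simplified resources list into an IAM policy document
    # with one Allow statement per resource entry.
    statements = [
        {
            'Effect': 'Allow',
            'Resource': r['arn'],
            'Action': r['actions'],
        }
        for r in resources
    ]
    return {'Version': '2012-10-17', 'Statement': statements}


policy = resources_to_policy([
    {'arn': 'arn:aws:logs:*:*:*', 'actions': ['*']},
])
print(json.dumps(policy, indent=2))
```

Either way, the result is an ordinary IAM policy attached to your function's execution role.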
......
#!/usr/bin/env python
# Copyright (c) 2014 Mitch Garnaat http://garnaat.org/
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
from datetime import datetime
import logging
import base64
import click
from kappa.context import Context
@click.group()
@click.argument(
    'config',
    type=click.File('rb'),
    envvar='KAPPA_CONFIG',
)
@click.option(
    '--debug/--no-debug',
    default=False,
    help='Turn on debugging output'
)
@click.pass_context
def cli(ctx, config=None, debug=False):
    ctx.obj['debug'] = debug
    ctx.obj['config'] = config


@cli.command()
@click.pass_context
def create(ctx):
    context = Context(ctx.obj['config'], ctx.obj['debug'])
    click.echo('creating...')
    context.create()
    click.echo('...done')


@cli.command()
@click.pass_context
def update_code(ctx):
    context = Context(ctx.obj['config'], ctx.obj['debug'])
    click.echo('updating code...')
    context.update_code()
    click.echo('...done')


@cli.command()
@click.pass_context
def invoke(ctx):
    context = Context(ctx.obj['config'], ctx.obj['debug'])
    click.echo('invoking...')
    response = context.invoke()
    log_data = base64.b64decode(response['LogResult'])
    click.echo(log_data)
    click.echo('...done')


@cli.command()
@click.pass_context
def dryrun(ctx):
    context = Context(ctx.obj['config'], ctx.obj['debug'])
    click.echo('invoking dryrun...')
    response = context.dryrun()
    click.echo(response)
    click.echo('...done')


@cli.command()
@click.pass_context
def invoke_async(ctx):
    context = Context(ctx.obj['config'], ctx.obj['debug'])
    click.echo('invoking async...')
    response = context.invoke_async()
    click.echo(response)
    click.echo('...done')


@cli.command()
@click.pass_context
def tail(ctx):
    context = Context(ctx.obj['config'], ctx.obj['debug'])
    click.echo('tailing logs...')
    for e in context.tail()[-10:]:
        ts = datetime.utcfromtimestamp(e['timestamp'] // 1000).isoformat()
        click.echo("{}: {}".format(ts, e['message']))
    click.echo('...done')


@cli.command()
@click.pass_context
def status(ctx):
    context = Context(ctx.obj['config'], ctx.obj['debug'])
    status = context.status()
    click.echo(click.style('Policy', bold=True))
    if status['policy']:
        line = '    {} ({})'.format(
            status['policy']['PolicyName'],
            status['policy']['Arn'])
        click.echo(click.style(line, fg='green'))
    click.echo(click.style('Role', bold=True))
    if status['role']:
        line = '    {} ({})'.format(
            status['role']['Role']['RoleName'],
            status['role']['Role']['Arn'])
        click.echo(click.style(line, fg='green'))
    click.echo(click.style('Function', bold=True))
    if status['function']:
        line = '    {} ({})'.format(
            status['function']['Configuration']['FunctionName'],
            status['function']['Configuration']['FunctionArn'])
        click.echo(click.style(line, fg='green'))
    else:
        click.echo(click.style('    None', fg='green'))
    click.echo(click.style('Event Sources', bold=True))
    if status['event_sources']:
        for event_source in status['event_sources']:
            if event_source:
                line = '    {}: {}'.format(
                    event_source['EventSourceArn'], event_source['State'])
                click.echo(click.style(line, fg='green'))
    else:
        click.echo(click.style('    None', fg='green'))


@cli.command()
@click.pass_context
def delete(ctx):
    context = Context(ctx.obj['config'], ctx.obj['debug'])
    click.echo('deleting...')
    context.delete()
    click.echo('...done')


@cli.command()
@click.pass_context
def add_event_sources(ctx):
    context = Context(ctx.obj['config'], ctx.obj['debug'])
    click.echo('adding event sources...')
    context.add_event_sources()
    click.echo('...done')


@cli.command()
@click.pass_context
def update_event_sources(ctx):
    context = Context(ctx.obj['config'], ctx.obj['debug'])
    click.echo('updating event sources...')
    context.update_event_sources()
    click.echo('...done')


if __name__ == '__main__':
    cli(obj={})
# Makefile for Sphinx documentation
#
# You can set these variables from the command line.
SPHINXOPTS =
SPHINXBUILD = sphinx-build
PAPER =
BUILDDIR = _build
# User-friendly check for sphinx-build
ifeq ($(shell which $(SPHINXBUILD) >/dev/null 2>&1; echo $$?), 1)
$(error The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed, then set the SPHINXBUILD environment variable to point to the full path of the '$(SPHINXBUILD)' executable. Alternatively you can add the directory with the executable to your PATH. If you don't have Sphinx installed, grab it from http://sphinx-doc.org/)
endif
# Internal variables.
PAPEROPT_a4 = -D latex_paper_size=a4
PAPEROPT_letter = -D latex_paper_size=letter
ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
# the i18n builder cannot share the environment and doctrees with the others
I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest coverage gettext
help:
	@echo "Please use \`make <target>' where <target> is one of"
	@echo "  html       to make standalone HTML files"
	@echo "  dirhtml    to make HTML files named index.html in directories"
	@echo "  singlehtml to make a single large HTML file"
	@echo "  pickle     to make pickle files"
	@echo "  json       to make JSON files"
	@echo "  htmlhelp   to make HTML files and a HTML help project"
	@echo "  qthelp     to make HTML files and a qthelp project"
	@echo "  applehelp  to make an Apple Help Book"
	@echo "  devhelp    to make HTML files and a Devhelp project"
	@echo "  epub       to make an epub"
	@echo "  latex      to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
	@echo "  latexpdf   to make LaTeX files and run them through pdflatex"
	@echo "  latexpdfja to make LaTeX files and run them through platex/dvipdfmx"
	@echo "  text       to make text files"
	@echo "  man        to make manual pages"
	@echo "  texinfo    to make Texinfo files"
	@echo "  info       to make Texinfo files and run them through makeinfo"
	@echo "  gettext    to make PO message catalogs"
	@echo "  changes    to make an overview of all changed/added/deprecated items"
	@echo "  xml        to make Docutils-native XML files"
	@echo "  pseudoxml  to make pseudoxml-XML files for display purposes"
	@echo "  linkcheck  to check all external links for integrity"
	@echo "  doctest    to run all doctests embedded in the documentation (if enabled)"
	@echo "  coverage   to run coverage check of the documentation (if enabled)"

clean:
	rm -rf $(BUILDDIR)/*

html:
	$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
	@echo
	@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."

dirhtml:
	$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
	@echo
	@echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."

singlehtml:
	$(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
	@echo
	@echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."

pickle:
	$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
	@echo
	@echo "Build finished; now you can process the pickle files."

json:
	$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
	@echo
	@echo "Build finished; now you can process the JSON files."

htmlhelp:
	$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
	@echo
	@echo "Build finished; now you can run HTML Help Workshop with the" \
	      ".hhp project file in $(BUILDDIR)/htmlhelp."

qthelp:
	$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
	@echo
	@echo "Build finished; now you can run "qcollectiongenerator" with the" \
	      ".qhcp project file in $(BUILDDIR)/qthelp, like this:"
	@echo "# qcollectiongenerator $(BUILDDIR)/qthelp/kappa.qhcp"
	@echo "To view the help file:"
	@echo "# assistant -collectionFile $(BUILDDIR)/qthelp/kappa.qhc"

applehelp:
	$(SPHINXBUILD) -b applehelp $(ALLSPHINXOPTS) $(BUILDDIR)/applehelp
	@echo
	@echo "Build finished. The help book is in $(BUILDDIR)/applehelp."
	@echo "N.B. You won't be able to view it unless you put it in" \
	      "~/Library/Documentation/Help or install it in your application" \
	      "bundle."

devhelp:
	$(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
	@echo
	@echo "Build finished."
	@echo "To view the help file:"
	@echo "# mkdir -p $$HOME/.local/share/devhelp/kappa"
	@echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/kappa"
	@echo "# devhelp"

epub:
	$(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
	@echo
	@echo "Build finished. The epub file is in $(BUILDDIR)/epub."

latex:
	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
	@echo
	@echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
	@echo "Run \`make' in that directory to run these through (pdf)latex" \
	      "(use \`make latexpdf' here to do that automatically)."

latexpdf:
	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
	@echo "Running LaTeX files through pdflatex..."
	$(MAKE) -C $(BUILDDIR)/latex all-pdf
	@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."

latexpdfja:
	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
	@echo "Running LaTeX files through platex and dvipdfmx..."
	$(MAKE) -C $(BUILDDIR)/latex all-pdf-ja
	@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."

text:
	$(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
	@echo
	@echo "Build finished. The text files are in $(BUILDDIR)/text."

man:
	$(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
	@echo
	@echo "Build finished. The manual pages are in $(BUILDDIR)/man."

texinfo:
	$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
	@echo
	@echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo."
	@echo "Run \`make' in that directory to run these through makeinfo" \
	      "(use \`make info' here to do that automatically)."

info:
	$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
	@echo "Running Texinfo files through makeinfo..."
	make -C $(BUILDDIR)/texinfo info
	@echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo."

gettext:
	$(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale
	@echo
	@echo "Build finished. The message catalogs are in $(BUILDDIR)/locale."

changes:
	$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
	@echo
	@echo "The overview file is in $(BUILDDIR)/changes."

linkcheck:
	$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
	@echo
	@echo "Link check complete; look for any errors in the above output " \
	      "or in $(BUILDDIR)/linkcheck/output.txt."

doctest:
	$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
	@echo "Testing of doctests in the sources finished, look at the " \
	      "results in $(BUILDDIR)/doctest/output.txt."

coverage:
	$(SPHINXBUILD) -b coverage $(ALLSPHINXOPTS) $(BUILDDIR)/coverage
	@echo "Testing of coverage in the sources finished, look at the " \
	      "results in $(BUILDDIR)/coverage/python.txt."

xml:
	$(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml
	@echo
	@echo "Build finished. The XML files are in $(BUILDDIR)/xml."

pseudoxml:
	$(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml
	@echo
	@echo "Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml."
Commands
========
Kappa is a command line tool. The basic command format is:
``kappa [options] <command> [optional command args]``
Available ``options`` are:
* --config <config_file> to specify where to find the kappa config file. The
default is to look in ``kappa.yml``.
* --env <environment> to specify which environment in your config file you are
using. The default is ``dev``.
* --debug/--no-debug to turn on/off the debug logging.
* --help to access command line help.
And ``command`` is one of:
* deploy
* delete
* invoke
* tag
* tail
* event_sources
* status
Details of each command are provided below.
deploy
------
The ``deploy`` command does whatever is required to deploy the
current version of your Lambda function, such as creating/updating policies and
roles, creating or updating the function itself, and adding any event sources
specified in your config file.
When the command is run the first time, it creates all of the relevant
resources required. On subsequent invocations, it will attempt to determine
what, if anything, has changed in the project and only update those resources.
delete
------
The ``delete`` command deletes the Lambda function, removes any event sources,
and deletes the IAM policy and role.
invoke
------
The ``invoke`` command makes a synchronous call to your Lambda function,
passing test data, and displays the resulting log data and any response
returned from your Lambda function.
The ``invoke`` command takes one positional argument, the ``data_file``. This
should be the path to a JSON data file that will be sent to the function as
data.
tag
---
The ``tag`` command tags the current version of the Lambda function with a
symbolic tag. In Lambda terms, this creates an ``alias``.
The ``tag`` command requires two additional positional arguments:
* name - the name of tag or alias
* description - the description of the alias
tail
----
The ``tail`` command displays the most recent log events for the function.
Remember that it can take several minutes before log events are available
from CloudWatch.
test
----
The ``test`` command provides a way to run unit tests of the code in your
Lambda function. By default, it uses the ``nose`` Python test runner, but this
can be overridden by specifying an alternative value in the
``unit_test_runner`` attribute of the kappa config file.
When using nose, kappa expects to find standard Python unit tests in the
``_tests/unit`` directory of your project. It will then run those tests in an
environment that also makes any Python modules in your ``_src`` directory
available to the tests.
event_sources
-------------
The ``event_sources`` command provides access to the commands available for
dealing with event sources. This command takes an additional positional
argument, ``command``.
* command - the command to run (list|enable|disable)
status
------
The ``status`` command displays summary information about functions, stacks,
and event sources related to your project.
# -*- coding: utf-8 -*-
#
# kappa documentation build configuration file, created by
# sphinx-quickstart on Tue Oct 13 12:59:27 2015.
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
import sys
import os
import shlex
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#sys.path.insert(0, os.path.abspath('.'))
# -- General configuration ------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
    'sphinx.ext.autodoc',
]
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix(es) of source filenames.
# You can specify multiple suffix as a list of string:
# source_suffix = ['.rst', '.md']
source_suffix = '.rst'
# The encoding of source files.
#source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'kappa'
copyright = u'2015, Mitch Garnaat'
author = u'Mitch Garnaat'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = '0.4.0'
# The full version, including alpha/beta/rc tags.
release = '0.4.0'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#
# This is also used if you do content translation via gettext catalogs.
# Usually you set "language" from the command line for these cases.
language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = ['_build']
# The reST default role (used for this markup: `text`) to use for all
# documents.
#default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
#modindex_common_prefix = []
# If true, keep warnings as "system message" paragraphs in the built documents.
#keep_warnings = False
# If true, `todo` and `todoList` produce output, else they produce nothing.
todo_include_todos = False
# -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'alabaster'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
#html_theme_path = []
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
#html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
#html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
#html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
#html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# Add any extra paths that contain custom files (such as robots.txt or
# .htaccess) here, relative to this directory. These files are copied
# directly to the root of the documentation.
#html_extra_path = []
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
#html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {}
# If false, no module index is generated.
#html_domain_indices = True
# If false, no index is generated.
#html_use_index = True
# If true, the index is split into individual pages for each letter.
#html_split_index = False
# If true, links to the reST sources are added to the pages.
#html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
#html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
#html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = None
# Language to be used for generating the HTML full-text search index.
# Sphinx supports the following languages:
# 'da', 'de', 'en', 'es', 'fi', 'fr', 'hu', 'it', 'ja'
# 'nl', 'no', 'pt', 'ro', 'ru', 'sv', 'tr'
#html_search_language = 'en'
# A dictionary with options for the search language support, empty by default.
# Now only 'ja' uses this config value
#html_search_options = {'type': 'default'}
# The name of a javascript file (relative to the configuration directory) that
# implements a search results scorer. If empty, the default will be used.
#html_search_scorer = 'scorer.js'
# Output file base name for HTML help builder.
htmlhelp_basename = 'kappadoc'
# -- Options for LaTeX output ---------------------------------------------
latex_elements = {
    # The paper size ('letterpaper' or 'a4paper').
    #'papersize': 'letterpaper',
    # The font size ('10pt', '11pt' or '12pt').
    #'pointsize': '10pt',
    # Additional stuff for the LaTeX preamble.
    #'preamble': '',
    # Latex figure (float) alignment
    #'figure_align': 'htbp',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
    (master_doc, 'kappa.tex', u'kappa Documentation',
     u'Mitch Garnaat', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
#latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#latex_use_parts = False
# If true, show page references after internal links.
#latex_show_pagerefs = False
# If true, show URL addresses after external links.
#latex_show_urls = False
# Documents to append as an appendix to all manuals.
#latex_appendices = []
# If false, no module index is generated.
#latex_domain_indices = True
# -- Options for manual page output ---------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
(master_doc, 'kappa', u'kappa Documentation',
[author], 1)
]
# If true, show URL addresses after external links.
#man_show_urls = False
# -- Options for Texinfo output -------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
(master_doc, 'kappa', u'kappa Documentation',
author, 'kappa', 'One line description of project.',
'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
#texinfo_appendices = []
# If false, no module index is generated.
#texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
#texinfo_show_urls = 'footnote'
# If true, do not generate a @detailmenu in the "Top" node's menu.
#texinfo_no_detailmenu = False
The Config File
===============
The config file is at the heart of kappa. It is what describes your functions
and drives your deployments. This section provides a reference for all of the
elements of the kappa config file.
Example
-------
Here is an example config file showing all possible sections.
.. sourcecode:: yaml
:linenos:
---
name: kappa-python-sample
environments:
env1:
profile: profile1
region: us-west-2
policy:
resources:
- arn: arn:aws:dynamodb:us-west-2:123456789012:table/foo
actions:
- "*"
- arn: arn:aws:logs:*:*:*
actions:
- "*"
event_sources:
-
arn: arn:aws:kinesis:us-west-2:123456789012:stream/foo
starting_position: LATEST
batch_size: 100
env2:
profile: profile2
region: us-west-2
policy:
resources:
- arn: arn:aws:dynamodb:us-west-2:234567890123:table/foo
actions:
- "*"
- arn: arn:aws:logs:*:*:*
actions:
- "*"
event_sources:
-
arn: arn:aws:kinesis:us-west-2:234567890123:stream/foo
starting_position: LATEST
batch_size: 100
lambda:
description: A simple Python sample
handler: simple.handler
runtime: python2.7
memory_size: 256
timeout: 3
vpc_config:
security_group_ids:
- sg-12345678
- sg-23456789
subnet_ids:
- subnet-12345678
- subnet-23456789
Explanations:
=========== =============================================================
Line Number Description
=========== =============================================================
2 This name will be used to name the function itself as well as
any policies and roles created for use by the function.
3 A map of environments. Each environment represents one
possible deployment target. For example, you might have a
dev and a prod. The names can be whatever you want but the
environment names are specified using the --env option when
you deploy.
5 The profile name associated with this environment. This
refers to a profile in your AWS credential file.
6 The AWS region associated with this environment.
7 This section defines the elements of the IAM policy that will
be created for this function in this environment.
9 Each resource your function needs access to must be
listed here. Provide the ARN of the resource as well as
a list of actions. Actions could be wildcarded to allow
everything, but preferably you should list only the
specific actions you want to allow.
15 If your Lambda function has any event sources, this would be
where you list them. Here, the example shows a Kinesis
stream but this could also be a DynamoDB stream, an SNS
topic, or an S3 bucket.
18 For Kinesis streams and DynamoDB streams, you can specify
the starting position (one of LATEST or TRIM_HORIZON) and
the batch size.
35 This section contains settings specific to your Lambda
function. See the Lambda docs for details on these.
=========== =============================================================
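Once a config like the one above has been parsed (kappa uses PyYAML for this), selecting an environment via ``--env`` is just a nested lookup. A minimal sketch, using a dict literal that mirrors the YAML example; the ``load_environment`` helper is hypothetical, for illustration only, not part of kappa's API.

```python
# Parsed form of the YAML example above (abbreviated). The
# load_environment helper below is hypothetical, not kappa API.
config = {
    'name': 'kappa-python-sample',
    'environments': {
        'env1': {'profile': 'profile1', 'region': 'us-west-2'},
        'env2': {'profile': 'profile2', 'region': 'us-west-2'},
    },
    'lambda': {'handler': 'simple.handler', 'runtime': 'python2.7'},
}

def load_environment(config, env_name):
    """Return (profile, region) for the environment chosen via --env."""
    env = config['environments'][env_name]
    return env['profile'], env['region']

print(load_environment(config, 'env2'))  # ('profile2', 'us-west-2')
```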
.. kappa documentation master file, created by
sphinx-quickstart on Tue Oct 13 12:59:27 2015.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
Welcome to kappa's documentation
================================
Contents:
.. toctree::
:maxdepth: 2
why
how
config_file_example
commands
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
@ECHO OFF
REM Command file for Sphinx documentation
if "%SPHINXBUILD%" == "" (
set SPHINXBUILD=sphinx-build
)
set BUILDDIR=_build
set ALLSPHINXOPTS=-d %BUILDDIR%/doctrees %SPHINXOPTS% .
set I18NSPHINXOPTS=%SPHINXOPTS% .
if NOT "%PAPER%" == "" (
set ALLSPHINXOPTS=-D latex_paper_size=%PAPER% %ALLSPHINXOPTS%
set I18NSPHINXOPTS=-D latex_paper_size=%PAPER% %I18NSPHINXOPTS%
)
if "%1" == "" goto help
if "%1" == "help" (
:help
echo.Please use `make ^<target^>` where ^<target^> is one of
echo. html to make standalone HTML files
echo. dirhtml to make HTML files named index.html in directories
echo. singlehtml to make a single large HTML file
echo. pickle to make pickle files
echo. json to make JSON files
echo. htmlhelp to make HTML files and a HTML help project
echo. qthelp to make HTML files and a qthelp project
echo. devhelp to make HTML files and a Devhelp project
echo. epub to make an epub
echo. latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter
echo. text to make text files
echo. man to make manual pages
echo. texinfo to make Texinfo files
echo. gettext to make PO message catalogs
echo. changes to make an overview over all changed/added/deprecated items
echo. xml to make Docutils-native XML files
echo. pseudoxml to make pseudoxml-XML files for display purposes
echo. linkcheck to check all external links for integrity
echo. doctest to run all doctests embedded in the documentation if enabled
echo. coverage to run coverage check of the documentation if enabled
goto end
)
if "%1" == "clean" (
for /d %%i in (%BUILDDIR%\*) do rmdir /q /s %%i
del /q /s %BUILDDIR%\*
goto end
)
REM Check if sphinx-build is available and fallback to Python version if any
%SPHINXBUILD% 2> nul
if errorlevel 9009 goto sphinx_python
goto sphinx_ok
:sphinx_python
set SPHINXBUILD=python -m sphinx.__init__
%SPHINXBUILD% 2> nul
if errorlevel 9009 (
echo.
echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
echo.installed, then set the SPHINXBUILD environment variable to point
echo.to the full path of the 'sphinx-build' executable. Alternatively you
echo.may add the Sphinx directory to PATH.
echo.
echo.If you don't have Sphinx installed, grab it from
echo.http://sphinx-doc.org/
exit /b 1
)
:sphinx_ok
if "%1" == "html" (
%SPHINXBUILD% -b html %ALLSPHINXOPTS% %BUILDDIR%/html
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The HTML pages are in %BUILDDIR%/html.
goto end
)
if "%1" == "dirhtml" (
%SPHINXBUILD% -b dirhtml %ALLSPHINXOPTS% %BUILDDIR%/dirhtml
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The HTML pages are in %BUILDDIR%/dirhtml.
goto end
)
if "%1" == "singlehtml" (
%SPHINXBUILD% -b singlehtml %ALLSPHINXOPTS% %BUILDDIR%/singlehtml
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The HTML pages are in %BUILDDIR%/singlehtml.
goto end
)
if "%1" == "pickle" (
%SPHINXBUILD% -b pickle %ALLSPHINXOPTS% %BUILDDIR%/pickle
if errorlevel 1 exit /b 1
echo.
echo.Build finished; now you can process the pickle files.
goto end
)
if "%1" == "json" (
%SPHINXBUILD% -b json %ALLSPHINXOPTS% %BUILDDIR%/json
if errorlevel 1 exit /b 1
echo.
echo.Build finished; now you can process the JSON files.
goto end
)
if "%1" == "htmlhelp" (
%SPHINXBUILD% -b htmlhelp %ALLSPHINXOPTS% %BUILDDIR%/htmlhelp
if errorlevel 1 exit /b 1
echo.
echo.Build finished; now you can run HTML Help Workshop with the ^
.hhp project file in %BUILDDIR%/htmlhelp.
goto end
)
if "%1" == "qthelp" (
%SPHINXBUILD% -b qthelp %ALLSPHINXOPTS% %BUILDDIR%/qthelp
if errorlevel 1 exit /b 1
echo.
echo.Build finished; now you can run "qcollectiongenerator" with the ^
.qhcp project file in %BUILDDIR%/qthelp, like this:
echo.^> qcollectiongenerator %BUILDDIR%\qthelp\kappa.qhcp
echo.To view the help file:
echo.^> assistant -collectionFile %BUILDDIR%\qthelp\kappa.ghc
goto end
)
if "%1" == "devhelp" (
%SPHINXBUILD% -b devhelp %ALLSPHINXOPTS% %BUILDDIR%/devhelp
if errorlevel 1 exit /b 1
echo.
echo.Build finished.
goto end
)
if "%1" == "epub" (
%SPHINXBUILD% -b epub %ALLSPHINXOPTS% %BUILDDIR%/epub
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The epub file is in %BUILDDIR%/epub.
goto end
)
if "%1" == "latex" (
%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex
if errorlevel 1 exit /b 1
echo.
echo.Build finished; the LaTeX files are in %BUILDDIR%/latex.
goto end
)
if "%1" == "latexpdf" (
%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex
cd %BUILDDIR%/latex
make all-pdf
cd %~dp0
echo.
echo.Build finished; the PDF files are in %BUILDDIR%/latex.
goto end
)
if "%1" == "latexpdfja" (
%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex
cd %BUILDDIR%/latex
make all-pdf-ja
cd %~dp0
echo.
echo.Build finished; the PDF files are in %BUILDDIR%/latex.
goto end
)
if "%1" == "text" (
%SPHINXBUILD% -b text %ALLSPHINXOPTS% %BUILDDIR%/text
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The text files are in %BUILDDIR%/text.
goto end
)
if "%1" == "man" (
%SPHINXBUILD% -b man %ALLSPHINXOPTS% %BUILDDIR%/man
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The manual pages are in %BUILDDIR%/man.
goto end
)
if "%1" == "texinfo" (
%SPHINXBUILD% -b texinfo %ALLSPHINXOPTS% %BUILDDIR%/texinfo
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The Texinfo files are in %BUILDDIR%/texinfo.
goto end
)
if "%1" == "gettext" (
%SPHINXBUILD% -b gettext %I18NSPHINXOPTS% %BUILDDIR%/locale
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The message catalogs are in %BUILDDIR%/locale.
goto end
)
if "%1" == "changes" (
%SPHINXBUILD% -b changes %ALLSPHINXOPTS% %BUILDDIR%/changes
if errorlevel 1 exit /b 1
echo.
echo.The overview file is in %BUILDDIR%/changes.
goto end
)
if "%1" == "linkcheck" (
%SPHINXBUILD% -b linkcheck %ALLSPHINXOPTS% %BUILDDIR%/linkcheck
if errorlevel 1 exit /b 1
echo.
echo.Link check complete; look for any errors in the above output ^
or in %BUILDDIR%/linkcheck/output.txt.
goto end
)
if "%1" == "doctest" (
%SPHINXBUILD% -b doctest %ALLSPHINXOPTS% %BUILDDIR%/doctest
if errorlevel 1 exit /b 1
echo.
echo.Testing of doctests in the sources finished, look at the ^
results in %BUILDDIR%/doctest/output.txt.
goto end
)
if "%1" == "coverage" (
%SPHINXBUILD% -b coverage %ALLSPHINXOPTS% %BUILDDIR%/coverage
if errorlevel 1 exit /b 1
echo.
echo.Testing of coverage in the sources finished, look at the ^
results in %BUILDDIR%/coverage/python.txt.
goto end
)
if "%1" == "xml" (
%SPHINXBUILD% -b xml %ALLSPHINXOPTS% %BUILDDIR%/xml
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The XML files are in %BUILDDIR%/xml.
goto end
)
if "%1" == "pseudoxml" (
%SPHINXBUILD% -b pseudoxml %ALLSPHINXOPTS% %BUILDDIR%/pseudoxml
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The pseudo-XML files are in %BUILDDIR%/pseudoxml.
goto end
)
:end
Why kappa?
==========
You can do everything kappa does by using the AWS Management Console, so why
use kappa? Because using GUI interfaces to drive your production environment
is a really bad idea. You can't automate GUI interfaces,
you can't debug GUI interfaces, and you can't easily share techniques and best
practices with a GUI.
The goal of kappa is to put everything about your AWS Lambda function into
files on a filesystem which can be easily versioned and shared. Once your
files are in git, people on your team can create pull requests to merge new
changes in and those pull requests can be reviewed, commented on, and
eventually approved. This is a tried and true approach that has worked for
more traditional deployment methodologies and will also work for AWS Lambda.
# Copyright (c) 2014 Mitch Garnaat http://garnaat.org/
# Copyright (c) 2014, 2015 Mitch Garnaat
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://aws.amazon.com/apache2.0/
# http://www.apache.org/licenses/LICENSE-2.0
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
......
# Copyright (c) 2014,2015 Mitch Garnaat http://garnaat.org/
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
import boto3
class __AWS(object):
def __init__(self, profile_name=None, region_name=None):
self._client_cache = {}
self._session = boto3.session.Session(
region_name=region_name, profile_name=profile_name)
def create_client(self, client_name):
if client_name not in self._client_cache:
self._client_cache[client_name] = self._session.client(
client_name)
return self._client_cache[client_name]
__Singleton_AWS = None
def get_aws(context):
global __Singleton_AWS
if __Singleton_AWS is None:
__Singleton_AWS = __AWS(context.profile, context.region)
return __Singleton_AWS
# Copyright (c) 2015 Mitch Garnaat
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import jmespath
import boto3
LOG = logging.getLogger(__name__)
_session_cache = {}
class AWSClient(object):
def __init__(self, service_name, session):
self._service_name = service_name
self._session = session
self.client = self._create_client()
@property
def service_name(self):
return self._service_name
@property
def session(self):
return self._session
@property
def region_name(self):
return self.client.meta.region_name
def _create_client(self):
client = self._session.client(self._service_name)
return client
def call(self, op_name, query=None, **kwargs):
"""
Make a request to a method in this client. The response data is
returned from this call as native Python data structures.
This method differs from just calling the client method directly
in the following ways:
* It automatically handles the pagination rather than
relying on a separate pagination method call.
* You can pass an optional jmespath query and this query
will be applied to the data returned from the low-level
call. This allows you to tailor the returned data to be
exactly what you want.
:type op_name: str
:param op_name: The name of the request you wish to make.
:type query: str
:param query: A jmespath query that will be applied to the
data returned by the operation prior to returning
it to the user.
:type kwargs: keyword arguments
:param kwargs: Additional keyword arguments you want to pass
to the method when making the request.
"""
LOG.debug(kwargs)
if query:
query = jmespath.compile(query)
if self.client.can_paginate(op_name):
paginator = self.client.get_paginator(op_name)
results = paginator.paginate(**kwargs)
data = results.build_full_result()
else:
op = getattr(self.client, op_name)
data = op(**kwargs)
if query:
data = query.search(data)
return data
def create_session(profile_name, region_name):
global _session_cache
session_key = '{}:{}'.format(profile_name, region_name)
if session_key not in _session_cache:
session = boto3.session.Session(
region_name=region_name, profile_name=profile_name)
_session_cache[session_key] = session
return _session_cache[session_key]
def create_client(service_name, session):
return AWSClient(service_name, session)
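The unified ``call`` interface above can be illustrated without touching AWS by standing in a fake client. ``FakeClient`` and ``FakePaginator`` below are test doubles invented for this sketch (they are not boto3), and the jmespath query step is omitted to keep the example dependency-free.

```python
# Sketch of AWSClient.call()'s pagination handling against a stand-in
# client (not boto3). FakeClient/FakePaginator are invented here; the
# optional jmespath query step is omitted for simplicity.
class FakePaginator(object):
    def __init__(self, pages):
        self.pages = pages

    def paginate(self, **kwargs):
        return self

    def build_full_result(self):
        # Merge every page's 'Items' into one result, mimicking what
        # boto3's build_full_result does for paginated operations.
        merged = []
        for page in self.pages:
            merged.extend(page['Items'])
        return {'Items': merged}

class FakeClient(object):
    def __init__(self, pages):
        self.pages = pages

    def can_paginate(self, op_name):
        return op_name == 'list_items'

    def get_paginator(self, op_name):
        return FakePaginator(self.pages)

def call(client, op_name, **kwargs):
    # Same shape as AWSClient.call: paginate when the operation
    # supports it, otherwise invoke the method directly.
    if client.can_paginate(op_name):
        paginator = client.get_paginator(op_name)
        return paginator.paginate(**kwargs).build_full_result()
    return getattr(client, op_name)(**kwargs)

client = FakeClient([{'Items': [1, 2]}, {'Items': [3]}])
print(call(client, 'list_items'))  # {'Items': [1, 2, 3]}
```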
# Copyright (c) 2014 Mitch Garnaat http://garnaat.org/
# Copyright (c) 2014, 2015 Mitch Garnaat
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://aws.amazon.com/apache2.0/
# http://www.apache.org/licenses/LICENSE-2.0
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import yaml
import time
import os
import shutil
import kappa.function
import kappa.restapi
import kappa.event_source
import kappa.policy
import kappa.role
import kappa.awsclient
import placebo
LOG = logging.getLogger(__name__)
DebugFmtString = '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
InfoFmtString = '\t%(message)s'
InfoFmtString = '...%(message)s'
class Context(object):
def __init__(self, config_file, debug=False):
def __init__(self, config_file, environment=None,
debug=False, recording_path=None):
if debug:
self.set_logger('kappa', logging.DEBUG)
else:
self.set_logger('kappa', logging.INFO)
self._load_cache()
self.config = yaml.load(config_file)
if 'policy' in self.config.get('iam', ''):
self.policy = kappa.policy.Policy(
self, self.config['iam']['policy'])
else:
self.policy = None
if 'role' in self.config.get('iam', ''):
self.role = kappa.role.Role(
self, self.config['iam']['role'])
else:
self.role = None
self.environment = environment
profile = self.config['environments'][self.environment]['profile']
region = self.config['environments'][self.environment]['region']
self.session = kappa.awsclient.create_session(profile, region)
if recording_path:
self.pill = placebo.attach(self.session, recording_path)
self.pill.record()
self.policy = kappa.policy.Policy(
self, self.config['environments'][self.environment])
self.role = kappa.role.Role(
self, self.config['environments'][self.environment])
self.function = kappa.function.Function(
self, self.config['lambda'])
if 'restapi' in self.config:
self.restapi = kappa.restapi.RestApi(
self, self.config['restapi'])
else:
self.restapi = None
self.event_sources = []
self._create_event_sources()
def _load_cache(self):
self.cache = {}
if os.path.isdir('.kappa'):
cache_file = os.path.join('.kappa', 'cache')
if os.path.isfile(cache_file):
with open(cache_file, 'r') as fp:
self.cache = yaml.load(fp)
def _delete_cache(self):
if os.path.isdir('.kappa'):
shutil.rmtree('.kappa')
self.cache = {}
def _save_cache(self):
if not os.path.isdir('.kappa'):
os.mkdir('.kappa')
cache_file = os.path.join('.kappa', 'cache')
with open(cache_file, 'w') as fp:
yaml.dump(self.cache, fp)
def get_cache_value(self, key):
return self.cache.setdefault(self.environment, dict()).get(key)
def set_cache_value(self, key, value):
self.cache.setdefault(
self.environment, dict())[key] = value.encode('utf-8')
self._save_cache()
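The per-environment cache above (one nested dict per environment name, persisted as YAML under ``.kappa/``) boils down to ``dict.setdefault``. A file-free sketch of the same pattern, with the persistence step omitted:

```python
# In-memory sketch of Context's per-environment cache keyed by
# environment name; the YAML persistence under .kappa/ is omitted.
cache = {}

def set_cache_value(cache, environment, key, value):
    cache.setdefault(environment, dict())[key] = value

def get_cache_value(cache, environment, key):
    return cache.setdefault(environment, dict()).get(key)

set_cache_value(cache, 'env1', 'zip_md5', 'abc123')
print(get_cache_value(cache, 'env1', 'zip_md5'))  # abc123
print(get_cache_value(cache, 'env2', 'zip_md5'))  # None
```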
@property
def name(self):
return self.config.get('name', os.path.basename(os.getcwd()))
@property
def profile(self):
return self.config.get('profile', None)
return self.config['environments'][self.environment]['profile']
@property
def region(self):
return self.config.get('region', None)
return self.config['environments'][self.environment]['region']
@property
def record(self):
return self.config.get('record', False)
@property
def lambda_config(self):
return self.config.get('lambda', None)
return self.config.get('lambda')
@property
def test_dir(self):
return self.config.get('tests', '_tests')
@property
def source_dir(self):
return self.config.get('source', '_src')
@property
def unit_test_runner(self):
return self.config.get('unit_test_runner',
'nosetests . ../{}/unit/'.format(self.test_dir))
@property
def exec_role_arn(self):
......@@ -92,8 +156,9 @@ class Context(object):
log.addHandler(ch)
def _create_event_sources(self):
if 'event_sources' in self.config['lambda']:
for event_source_cfg in self.config['lambda']['event_sources']:
env_cfg = self.config['environments'][self.environment]
if 'event_sources' in env_cfg:
for event_source_cfg in env_cfg['event_sources']:
_, _, svc, _ = event_source_cfg['arn'].split(':', 3)
if svc == 'kinesis':
self.event_sources.append(
......@@ -122,6 +187,23 @@ class Context(object):
for event_source in self.event_sources:
event_source.update(self.function)
def list_event_sources(self):
event_sources = []
for event_source in self.event_sources:
event_sources.append({'arn': event_source.arn,
'starting_position': event_source.starting_position,
'batch_size': event_source.batch_size,
'enabled': event_source.enabled})
return event_sources
def enable_event_sources(self):
for event_source in self.event_sources:
event_source.enable(self.function)
def disable_event_sources(self):
for event_source in self.event_sources:
event_source.disable(self.function)
def create(self):
if self.policy:
self.policy.create()
......@@ -133,12 +215,31 @@ class Context(object):
LOG.debug('Waiting for policy/role propagation')
time.sleep(5)
self.function.create()
self.add_event_sources()
def deploy(self):
if self.policy:
self.policy.deploy()
if self.role:
self.role.create()
self.function.deploy()
if self.restapi:
self.restapi.deploy()
def invoke(self, data):
return self.function.invoke(data)
def update_code(self):
self.function.update()
def unit_tests(self):
# run any unit tests
unit_test_path = os.path.join(self.test_dir, 'unit')
if os.path.exists(unit_test_path):
os.chdir(self.source_dir)
print('running unit tests')
pipe = os.popen(self.unit_test_runner, 'r')
print(pipe.read())
def invoke(self):
return self.function.invoke()
def test(self):
return self.unit_tests()
def dryrun(self):
return self.function.dryrun()
......@@ -154,12 +255,15 @@ class Context(object):
event_source.remove(self.function)
self.function.log.delete()
self.function.delete()
if self.restapi:
self.restapi.delete()
time.sleep(5)
if self.role:
self.role.delete()
time.sleep(5)
if self.policy:
self.policy.delete()
self._delete_cache()
def status(self):
status = {}
......
# Copyright (c) 2014 Mitch Garnaat http://garnaat.org/
# Copyright (c) 2014, 2015 Mitch Garnaat
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://aws.amazon.com/apache2.0/
# http://www.apache.org/licenses/LICENSE-2.0
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from botocore.exceptions import ClientError
import kappa.aws
import kappa.awsclient
LOG = logging.getLogger(__name__)
......@@ -32,7 +33,7 @@ class EventSource(object):
@property
def starting_position(self):
return self._config.get('starting_position', 'TRIM_HORIZON')
return self._config.get('starting_position', 'LATEST')
@property
def batch_size(self):
......@@ -40,19 +41,20 @@ class EventSource(object):
@property
def enabled(self):
return self._config.get('enabled', True)
return self._config.get('enabled', False)
class KinesisEventSource(EventSource):
def __init__(self, context, config):
super(KinesisEventSource, self).__init__(context, config)
aws = kappa.aws.get_aws(context)
self._lambda = aws.create_client('lambda')
self._lambda = kappa.awsclient.create_client(
'lambda', context.session)
def _get_uuid(self, function):
uuid = None
response = self._lambda.list_event_source_mappings(
response = self._lambda.call(
'list_event_source_mappings',
FunctionName=function.name,
EventSourceArn=self.arn)
LOG.debug(response)
......@@ -62,7 +64,8 @@ class KinesisEventSource(EventSource):
def add(self, function):
try:
response = self._lambda.create_event_source_mapping(
response = self._lambda.call(
'create_event_source_mapping',
FunctionName=function.name,
EventSourceArn=self.arn,
BatchSize=self.batch_size,
......@@ -73,12 +76,37 @@ class KinesisEventSource(EventSource):
except Exception:
LOG.exception('Unable to add event source')
def enable(self, function):
self._config['enabled'] = True
try:
response = self._lambda.call(
'update_event_source_mapping',
FunctionName=function.name,
Enabled=self.enabled
)
LOG.debug(response)
except Exception:
LOG.exception('Unable to enable event source')
def disable(self, function):
self._config['enabled'] = False
try:
response = self._lambda.call(
'update_event_source_mapping',
FunctionName=function.name,
Enabled=self.enabled
)
LOG.debug(response)
except Exception:
LOG.exception('Unable to disable event source')
def update(self, function):
response = None
uuid = self._get_uuid(function)
if uuid:
try:
response = self._lambda.update_event_source_mapping(
response = self._lambda.call(
'update_event_source_mapping',
BatchSize=self.batch_size,
Enabled=self.enabled,
FunctionName=function.arn)
......@@ -90,7 +118,8 @@ class KinesisEventSource(EventSource):
response = None
uuid = self._get_uuid(function)
if uuid:
response = self._lambda.delete_event_source_mapping(
response = self._lambda.call(
'delete_event_source_mapping',
UUID=uuid)
LOG.debug(response)
return response
......@@ -101,7 +130,8 @@ class KinesisEventSource(EventSource):
uuid = self._get_uuid(function)
if uuid:
try:
response = self._lambda.get_event_source_mapping(
response = self._lambda.call(
'get_event_source_mapping',
UUID=self._get_uuid(function))
LOG.debug(response)
except ClientError:
......@@ -121,8 +151,7 @@ class S3EventSource(EventSource):
def __init__(self, context, config):
super(S3EventSource, self).__init__(context, config)
aws = kappa.aws.get_aws(context)
self._s3 = aws.create_client('s3')
self._s3 = kappa.awsclient.create_client('s3', context.session)
def _make_notification_id(self, function_name):
return 'Kappa-%s-notification' % function_name
......@@ -132,7 +161,7 @@ class S3EventSource(EventSource):
def add(self, function):
notification_spec = {
'LambdaFunctionConfigurations':[
'LambdaFunctionConfigurations': [
{
'Id': self._make_notification_id(function.name),
'Events': [e for e in self._config['events']],
......@@ -141,7 +170,8 @@ class S3EventSource(EventSource):
]
}
try:
response = self._s3.put_bucket_notification_configuration(
response = self._s3.call(
'put_bucket_notification_configuration',
Bucket=self._get_bucket_name(),
NotificationConfiguration=notification_spec)
LOG.debug(response)
......@@ -154,7 +184,8 @@ class S3EventSource(EventSource):
def remove(self, function):
LOG.debug('removing s3 notification')
response = self._s3.get_bucket_notification(
response = self._s3.call(
'get_bucket_notification',
Bucket=self._get_bucket_name())
LOG.debug(response)
if 'CloudFunctionConfiguration' in response:
......@@ -162,14 +193,16 @@ class S3EventSource(EventSource):
if fn_arn == function.arn:
del response['CloudFunctionConfiguration']
del response['ResponseMetadata']
response = self._s3.put_bucket_notification(
response = self._s3.call(
'put_bucket_notification',
Bucket=self._get_bucket_name(),
NotificationConfiguration=response)
LOG.debug(response)
def status(self, function):
LOG.debug('status for s3 notification for %s', function.name)
response = self._s3.get_bucket_notification(
response = self._s3.call(
'get_bucket_notification',
Bucket=self._get_bucket_name())
LOG.debug(response)
if 'CloudFunctionConfiguration' not in response:
......@@ -181,15 +214,15 @@ class SNSEventSource(EventSource):
def __init__(self, context, config):
super(SNSEventSource, self).__init__(context, config)
aws = kappa.aws.get_aws(context)
self._sns = aws.create_client('sns')
self._sns = kappa.awsclient.create_client('sns', context.session)
def _make_notification_id(self, function_name):
return 'Kappa-%s-notification' % function_name
def exists(self, function):
try:
response = self._sns.list_subscriptions_by_topic(
response = self._sns.call(
'list_subscriptions_by_topic',
TopicArn=self.arn)
LOG.debug(response)
for subscription in response['Subscriptions']:
......@@ -201,7 +234,8 @@ class SNSEventSource(EventSource):
def add(self, function):
try:
response = self._sns.subscribe(
response = self._sns.call(
'subscribe',
TopicArn=self.arn, Protocol='lambda',
Endpoint=function.arn)
LOG.debug(response)
......@@ -216,7 +250,8 @@ class SNSEventSource(EventSource):
try:
subscription = self.exists(function)
if subscription:
response = self._sns.unsubscribe(
response = self._sns.call(
'unsubscribe',
SubscriptionArn=subscription['SubscriptionArn'])
LOG.debug(response)
except Exception:
......
# Copyright (c) 2014 Mitch Garnaat http://garnaat.org/
# Copyright (c) 2014, 2015 Mitch Garnaat
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://aws.amazon.com/apache2.0/
# http://www.apache.org/licenses/LICENSE-2.0
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from botocore.exceptions import ClientError
import kappa.aws
import kappa.awsclient
LOG = logging.getLogger(__name__)
......@@ -25,12 +26,12 @@ class Log(object):
def __init__(self, context, log_group_name):
self._context = context
self.log_group_name = log_group_name
self._log_client = kappa.awsclient.create_client(
'logs', context.session)
def _check_for_log_group(self):
LOG.debug('checking for log group')
response = self._log_client.call('describe_log_groups')
log_group_names = [lg['logGroupName'] for lg in response['logGroups']]
return self.log_group_name in log_group_names
@@ -40,7 +41,8 @@ class Log(object):
LOG.info(
'log group %s has not been created yet', self.log_group_name)
return []
response = self._log_client.call(
'describe_log_streams',
logGroupName=self.log_group_name)
LOG.debug(response)
return response['logStreams']
@@ -58,7 +60,8 @@ class Log(object):
latest_stream = stream
elif stream['lastEventTimestamp'] > latest_stream['lastEventTimestamp']:
latest_stream = stream
response = self._log_client.call(
'get_log_events',
logGroupName=self.log_group_name,
logStreamName=latest_stream['logStreamName'])
LOG.debug(response)
@@ -66,7 +69,8 @@ class Log(object):
def delete(self):
try:
response = self._log_client.call(
'delete_log_group',
logGroupName=self.log_group_name)
LOG.debug(response)
except ClientError:
# Copyright (c) 2014, 2015 Mitch Garnaat
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import json
import hashlib
import kappa.awsclient
LOG = logging.getLogger(__name__)
class Policy(object):
_path_prefix = '/kappa/'
def __init__(self, context, config):
self.context = context
self.config = config
self._iam_client = kappa.awsclient.create_client(
'iam', self.context.session)
self._arn = self.config['policy'].get('arn', None)
@property
def name(self):
return '{}_{}'.format(self.context.name, self.context.environment)
@property
def description(self):
return 'A kappa policy to control access to {} resources'.format(
self.context.environment)
def document(self):
if ('resources' not in self.config['policy'] and
'statements' not in self.config['policy']):
return None
document = {'Version': '2012-10-17'}
statements = []
document['Statement'] = statements
for resource in self.config['policy']['resources']:
arn = resource['arn']
_, _, service, _ = arn.split(':', 3)
statement = {"Effect": "Allow",
"Resource": resource['arn']}
actions = []
for action in resource['actions']:
actions.append("{}:{}".format(service, action))
statement['Action'] = actions
statements.append(statement)
for statement in self.config['policy'].get('statements', []):
statements.append(statement)
return json.dumps(document, indent=2, sort_keys=True)
@property
def arn(self):
@@ -52,20 +71,23 @@ class Policy(object):
return self._arn
def _find_all_policies(self):
try:
response = self._iam_client.call(
'list_policies', PathPrefix=self._path_prefix)
except Exception:
LOG.exception('Error listing policies')
response = {}
return response.get('Policies', list())
def _list_versions(self):
try:
response = self._iam_client.call(
'list_policy_versions',
PolicyArn=self.arn)
except Exception:
LOG.exception('Error listing policy versions')
response = {}
return response.get('Versions', list())
def exists(self):
for policy in self._find_all_policies():
@@ -73,27 +95,91 @@ class Policy(object):
return policy
return None
def _add_policy_version(self):
document = self.document()
if not document:
LOG.debug('not a custom policy, no need to version it')
return
versions = self._list_versions()
if len(versions) == 5:
try:
response = self._iam_client.call(
'delete_policy_version',
PolicyArn=self.arn,
VersionId=versions[-1]['VersionId'])
except Exception:
LOG.exception('Unable to delete policy version')
# update policy with a new version here
try:
response = self._iam_client.call(
'create_policy_version',
PolicyArn=self.arn,
PolicyDocument=document,
SetAsDefault=True)
LOG.debug(response)
except Exception:
LOG.exception('Error creating new Policy version')
def _check_md5(self, document):
m = hashlib.md5()
m.update(document.encode('utf-8'))
policy_md5 = m.hexdigest()
cached_md5 = self.context.get_cache_value('policy_md5')
LOG.debug('policy_md5: %s', policy_md5)
LOG.debug('cached md5: %s', cached_md5)
if policy_md5 != cached_md5:
self.context.set_cache_value('policy_md5', policy_md5)
return True
return False
def deploy(self):
LOG.info('deploying policy %s', self.name)
document = self.document()
if not document:
LOG.info('not a custom policy, no need to create it')
return
policy = self.exists()
if policy:
if self._check_md5(document):
self._add_policy_version()
else:
LOG.info('policy unchanged')
else:
# create a new policy
self._check_md5(document)
try:
response = self._iam_client.call(
'create_policy',
Path=self._path_prefix, PolicyName=self.name,
PolicyDocument=document,
Description=self.description)
LOG.debug(response)
except Exception:
LOG.exception('Error creating Policy')
def delete(self):
response = None
# Only delete the policy if it has a document associated with it.
# This indicates that it was a custom policy created by kappa.
document = self.document()
if self.arn and document:
LOG.info('deleting policy %s', self.name)
LOG.info('deleting all policy versions for %s', self.name)
versions = self._list_versions()
for version in versions:
LOG.debug('deleting version %s', version['VersionId'])
if not version['IsDefaultVersion']:
try:
response = self._iam_client.call(
'delete_policy_version',
PolicyArn=self.arn,
VersionId=version['VersionId'])
except Exception:
LOG.exception('Unable to delete policy version %s',
version['VersionId'])
LOG.debug('now delete policy')
response = self._iam_client.call(
'delete_policy', PolicyArn=self.arn)
LOG.debug(response)
return response
# Copyright (c) 2014, 2015 Mitch Garnaat
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from botocore.exceptions import ClientError
import kappa.awsclient
import kappa.log
LOG = logging.getLogger(__name__)
class RestApi(object):
def __init__(self, context, config):
self._context = context
self._config = config
self._apigateway_client = kappa.awsclient.create_client(
'apigateway', context.session)
self._api = None
self._resources = None
self._resource = None
@property
def arn(self):
_, _, _, region, account, _ = self._context.function.arn.split(':', 5)
arn = 'arn:aws:execute-api:{}:{}:{}/*/*/{}'.format(
region, account, self.api_id, self.resource_name)
return arn
@property
def api_name(self):
return self._config['name']
@property
def description(self):
return self._config['description']
@property
def resource_name(self):
return self._config['resource']['name']
@property
def parent_resource(self):
return self._config['resource']['parent']
@property
def full_path(self):
parts = self.parent_resource.split('/')
parts.append(self.resource_name)
return '/'.join(parts)
@property
def api_id(self):
api = self._get_api()
return api.get('id')
@property
def resource_id(self):
resources = self._get_resources()
return resources.get(self.full_path).get('id')
def _get_api(self):
if self._api is None:
try:
response = self._apigateway_client.call(
'get_rest_apis')
LOG.debug(response)
for item in response['items']:
if item['name'] == self.api_name:
self._api = item
except Exception:
LOG.exception('Error finding restapi')
return self._api
def _get_resources(self):
if self._resources is None:
try:
response = self._apigateway_client.call(
'get_resources',
restApiId=self.api_id)
LOG.debug(response)
self._resources = {}
for item in response['items']:
self._resources[item['path']] = item
except Exception:
LOG.exception('Unable to find resources for: %s',
self.api_name)
return self._resources
def create_restapi(self):
if not self.api_exists():
LOG.info('creating restapi %s', self.api_name)
try:
response = self._apigateway_client.call(
'create_rest_api',
name=self.api_name,
description=self.description)
LOG.debug(response)
except Exception:
LOG.exception('Unable to create new restapi')
def create_resource_path(self):
path = self.full_path
parts = path.split('/')
resources = self._get_resources()
parent = None
build_path = []
for part in parts:
LOG.debug('part=%s', part)
build_path.append(part)
LOG.debug('build_path=%s', build_path)
full_path = '/'.join(build_path)
LOG.debug('full_path=%s', full_path)
if full_path == '':
parent = resources['/']
else:
if full_path not in resources and parent:
try:
response = self._apigateway_client.call(
'create_resource',
restApiId=self.api_id,
parentId=parent['id'],
pathPart=part)
LOG.debug(response)
resources[full_path] = response
except Exception:
LOG.exception('Unable to create new resource')
parent = resources[full_path]
self._item = resources[path]
def create_method(self, method, config):
LOG.info('creating method: %s', method)
try:
response = self._apigateway_client.call(
'put_method',
restApiId=self.api_id,
resourceId=self.resource_id,
httpMethod=method,
authorizationType=config.get('authorization_type'),
apiKeyRequired=config.get('apikey_required', False)
)
LOG.debug(response)
LOG.debug('now create integration')
uri = 'arn:aws:apigateway:{}:'.format(
self._apigateway_client.region_name)
uri += 'lambda:path/2015-03-31/functions/'
uri += self._context.function.arn
uri += ':${stageVariables.environment}/invocations'
LOG.debug(uri)
response = self._apigateway_client.call(
'put_integration',
restApiId=self.api_id,
resourceId=self.resource_id,
httpMethod=method,
integrationHttpMethod=method,
type='AWS',
uri=uri
)
except Exception:
LOG.exception('Unable to create integration: %s', method)
def create_deployment(self):
LOG.info('creating a deployment for %s to stage: %s',
self.api_name, self._context.environment)
try:
response = self._apigateway_client.call(
'create_deployment',
restApiId=self.api_id,
stageName=self._context.environment
)
LOG.debug(response)
LOG.info('Now deployed to: %s', self.deployment_uri)
except Exception:
LOG.exception('Unable to create a deployment')
def create_methods(self):
resource_config = self._config['resource']
for method in resource_config.get('methods', dict()):
if not self.method_exists(method):
method_config = resource_config['methods'][method]
self.create_method(method, method_config)
def api_exists(self):
return self._get_api()
def resource_exists(self):
resources = self._get_resources()
return resources.get(self.full_path)
def method_exists(self, method):
exists = False
resource = self.resource_exists()
if resource:
methods = resource.get('resourceMethods')
if methods:
for method_name in methods:
if method_name == method:
exists = True
return exists
def find_parent_resource_id(self):
parent_id = None
resources = self._get_resources()
for path, item in resources.items():
if path == self.parent_resource:
parent_id = item['id']
return parent_id
def api_update(self):
LOG.info('updating restapi %s', self.api_name)
def resource_update(self):
LOG.info('updating resource %s', self.full_path)
def add_permission(self):
LOG.info('Adding permission for APIGateway to call function')
self._context.function.add_permission(
action='lambda:InvokeFunction',
principal='apigateway.amazonaws.com',
source_arn=self.arn)
def deploy(self):
if self.api_exists():
self.api_update()
else:
self.create_restapi()
if self.resource_exists():
self.resource_update()
else:
self.create_resource_path()
self.create_methods()
self.add_permission()
def delete(self):
LOG.info('deleting resource %s', self.resource_name)
try:
response = self._apigateway_client.call(
'delete_resource',
restApiId=self.api_id,
resourceId=self.resource_id)
LOG.debug(response)
except ClientError:
LOG.exception('Unable to delete resource %s', self.resource_name)
return response
def status(self):
try:
response = self._apigateway_client.call(
'get_rest_api',
restApiId=self.api_id)
LOG.debug(response)
except ClientError:
LOG.exception('restapi %s not found', self.api_name)
response = None
return response
# Copyright (c) 2014, 2015 Mitch Garnaat
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from botocore.exceptions import ClientError
import kappa.awsclient
LOG = logging.getLogger(__name__)
@@ -39,20 +40,20 @@ class Role(object):
def __init__(self, context, config):
self._context = context
self._config = config
self._iam_client = kappa.awsclient.create_client(
'iam', context.session)
self._arn = None
@property
def name(self):
return '{}_{}'.format(self._context.name, self._context.environment)
@property
def arn(self):
if self._arn is None:
try:
response = self._iam_client.call(
'get_role', RoleName=self.name)
LOG.debug(response)
self._arn = response['Role']['Arn']
except Exception:
@@ -60,20 +61,12 @@ class Role(object):
return self._arn
def _find_all_roles(self):
try:
response = self._iam_client.call('list_roles')
except Exception:
LOG.exception('Error listing roles')
response = {}
return response.get('Roles', list())
def exists(self):
for role in self._find_all_roles():
@@ -82,22 +75,26 @@ class Role(object):
return None
def create(self):
LOG.info('creating role %s', self.name)
role = self.exists()
if not role:
try:
response = self._iam_client.call(
'create_role',
Path=self.Path, RoleName=self.name,
AssumeRolePolicyDocument=AssumeRolePolicyDocument)
LOG.debug(response)
if self._context.policy:
LOG.debug('attaching policy %s', self._context.policy.arn)
response = self._iam_client.call(
'attach_role_policy',
RoleName=self.name,
PolicyArn=self._context.policy.arn)
LOG.debug(response)
except ClientError:
LOG.exception('Error creating Role')
else:
LOG.info('role already exists')
def delete(self):
response = None
@@ -106,10 +103,12 @@ class Role(object):
LOG.debug('First detach the policy from the role')
policy_arn = self._context.policy.arn
if policy_arn:
response = self._iam_client.call(
'detach_role_policy',
RoleName=self.name, PolicyArn=policy_arn)
LOG.debug(response)
response = self._iam_client.call(
'delete_role', RoleName=self.name)
LOG.debug(response)
except ClientError:
LOG.exception('role %s not found', self.name)
@@ -118,7 +117,8 @@ class Role(object):
def status(self):
LOG.debug('getting status for role %s', self.name)
try:
response = self._iam_client.call(
'get_role', RoleName=self.name)
LOG.debug(response)
except ClientError:
LOG.debug('role %s not found', self.name)
# Copyright (c) 2014, 2015 Mitch Garnaat
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#!/usr/bin/env python
# Copyright (c) 2014, 2015 Mitch Garnaat http://garnaat.org/
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
from datetime import datetime
import base64
import click
from kappa.context import Context
pass_ctx = click.make_pass_decorator(Context)
@click.group()
@click.option(
'--config',
default='kappa.yml',
type=click.File('rb'),
envvar='KAPPA_CONFIG',
help='Name of config file (default is kappa.yml)'
)
@click.option(
'--debug/--no-debug',
default=False,
help='Turn on debugging output'
)
@click.option(
'--env',
default='dev',
help='Specify which environment to work with (default dev)'
)
@click.option(
'--record-path',
type=click.Path(exists=True, file_okay=False, writable=True),
help='Uses placebo to record AWS responses to this path'
)
@click.pass_context
def cli(ctx, config=None, debug=False, env=None, record_path=None):
ctx.obj = Context(config, env, debug, record_path)
@cli.command()
@pass_ctx
def deploy(ctx):
"""Deploy the Lambda function and any policies and roles required"""
click.echo('deploying')
ctx.deploy()
click.echo('done')
@cli.command()
@click.argument('data_file', type=click.File('r'))
@pass_ctx
def invoke(ctx, data_file):
"""Invoke the command synchronously"""
click.echo('invoking')
response = ctx.invoke(data_file.read())
log_data = base64.b64decode(response['LogResult'])
click.echo(log_data)
click.echo('Response:')
click.echo(response['Payload'].read())
click.echo('done')
@cli.command()
@pass_ctx
def test(ctx):
"""Test the command synchronously"""
click.echo('testing')
ctx.test()
click.echo('done')
@cli.command()
@pass_ctx
def tail(ctx):
"""Show the last 10 lines of the log file"""
click.echo('tailing logs')
for e in ctx.tail()[-10:]:
ts = datetime.utcfromtimestamp(e['timestamp']//1000).isoformat()
click.echo("{}: {}".format(ts, e['message']))
click.echo('done')
@cli.command()
@pass_ctx
def status(ctx):
"""Print a status of this Lambda function"""
status = ctx.status()
click.echo(click.style('Policy', bold=True))
if status['policy']:
line = ' {} ({})'.format(
status['policy']['PolicyName'],
status['policy']['Arn'])
click.echo(click.style(line, fg='green'))
click.echo(click.style('Role', bold=True))
if status['role']:
line = ' {} ({})'.format(
status['role']['Role']['RoleName'],
status['role']['Role']['Arn'])
click.echo(click.style(line, fg='green'))
click.echo(click.style('Function', bold=True))
if status['function']:
line = ' {} ({})'.format(
status['function']['Configuration']['FunctionName'],
status['function']['Configuration']['FunctionArn'])
click.echo(click.style(line, fg='green'))
else:
click.echo(click.style(' None', fg='green'))
click.echo(click.style('Event Sources', bold=True))
if status['event_sources']:
for event_source in status['event_sources']:
if event_source:
line = ' {}: {}'.format(
event_source['EventSourceArn'], event_source['State'])
click.echo(click.style(line, fg='green'))
else:
click.echo(click.style(' None', fg='green'))
@cli.command()
@pass_ctx
def delete(ctx):
"""Delete the Lambda function and related policies and roles"""
click.echo('deleting')
ctx.delete()
click.echo('done')
@cli.command()
@click.argument('command',
type=click.Choice(['list', 'enable', 'disable']))
@pass_ctx
def event_sources(ctx, command):
"""List, enable, and disable event sources specified in the config file"""
if command == 'list':
click.echo('listing event sources')
event_sources = ctx.list_event_sources()
for es in event_sources:
click.echo('arn: {}'.format(es['arn']))
click.echo('starting position: {}'.format(es['starting_position']))
click.echo('batch size: {}'.format(es['batch_size']))
click.echo('enabled: {}'.format(es['enabled']))
click.echo('done')
elif command == 'enable':
click.echo('enabling event sources')
ctx.enable_event_sources()
click.echo('done')
elif command == 'disable':
click.echo('disabling event sources')
ctx.disable_event_sources()
click.echo('done')
boto3>=1.2.3
placebo>=0.8.1
click==5.1
PyYAML>=3.11
mock>=1.0.1
nose==1.3.1
.kappa/
kappa.yml
A Simple Python Example
=======================
In this Python example, we will build a Lambda function that can be hooked up
to methods in API Gateway to provide a simple CRUD REST API that persists JSON
objects in DynamoDB.
To implement this, we will create a single Lambda function that will be
associated with the GET, POST, PUT, and DELETE HTTP methods of a single API
Gateway resource. We will show the API Gateway connections later. For now, we
will focus on our Lambda function.
Installing Dependencies
-----------------------
Put all dependencies in the `requirements.txt` file in this directory and then
run the following command to install them in this directory prior to uploading
the code.
$ pip install -r requirements.txt -t /full/path/to/this/code
This will install all of the dependencies inside the code directory so they can
be bundled with your own code and deployed to Lambda.
The ``setup.cfg`` file in this directory is required if you are running on
MacOS and are using brew. It may not be needed on other platforms.
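If you need to recreate it, a minimal ``setup.cfg`` that works around the
Homebrew Python prefix problem looks something like this (a sketch of the
typical workaround, not necessarily the exact contents of the file in this
directory):

    [install]
    prefix=

This clears the custom install prefix that Homebrew's Python configures, which
otherwise causes ``pip install -t`` to fail with a "must supply either home or
prefix/exec-prefix" error.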
The Code Is Here!
=================
At the moment, the contents of this directory are created by hand, but when
LambdaPI is complete, the basic framework will be created for you. You will
have a Python source file that works but doesn't actually do anything, and the
config.json file here will be created on the fly at deployment time. The
correct resource names and other variables will be written into the config
file, and the config file will then be bundled up with the code. You can
then load the config file at run time in the Lambda Python code so you don't
have to hardcode resource names in your code.
Installing Dependencies
-----------------------
Put all dependencies in the `requirements.txt` file in this directory and then
run the following command to install them in this directory prior to uploading
the code.
$ pip install -r requirements.txt -t /full/path/to/this/code
This will install all of the dependencies inside the code directory so they can
be bundled with your own code and deployed to Lambda.
The ``setup.cfg`` file in this directory is required if you are running on
MacOS and are using brew. It may not be needed on other platforms.
{
"region_name": "us-west-2",
"sample_table": "kappa-python-sample"
}
{
"region_name": "us-west-2",
"sample_table": "kappa-python-sample"
}
git+ssh://git@github.com/garnaat/petard.git
import logging
import json
import uuid
import boto3
LOG = logging.getLogger()
LOG.setLevel(logging.INFO)
# The kappa deploy command will make sure that the right config file
# for this environment is available in the local directory.
config = json.load(open('config.json'))
session = boto3.Session(region_name=config['region_name'])
ddb_client = session.resource('dynamodb')
table = ddb_client.Table(config['sample_table'])
def foobar():
return 42
def _get(event, context):
customer_id = event.get('id')
if customer_id is None:
raise Exception('No id provided for GET operation')
response = table.get_item(Key={'id': customer_id})
item = response.get('Item')
if item is None:
raise Exception('id: {} not found'.format(customer_id))
return response['Item']
def _post(event, context):
item = event['json_body']
if item is None:
raise Exception('No json_body found in event')
item['id'] = str(uuid.uuid4())
table.put_item(Item=item)
return item
def _put(event, context):
data = _get(event, context)
id_ = data.get('id')
data.update(event['json_body'])
# don't allow the id to be changed
data['id'] = id_
table.put_item(Item=data)
return data
def _delete(event, context):
# make sure the item exists before deleting it
data = _get(event, context)
table.delete_item(Key={'id': data['id']})
return data
def handler(event, context):
LOG.info(event)
http_method = event.get('http_method')
if not http_method:
return 'NoHttpMethodSupplied'
if http_method == 'GET':
return _get(event, context)
elif http_method == 'POST':
return _post(event, context)
elif http_method == 'PUT':
return _put(event, context)
elif http_method == 'DELETE':
return _delete(event, context)
else:
raise Exception('UnsupportedMethod: {}'.format(http_method))
{
"http_method": "GET",
"id": "4a407fc2-da7a-41e9-8dc6-8a057b6b767a"
}
{
"http_method": "POST",
"json_body": {
"foo": "This is the foo value",
"bar": "This is the bar value"
}
}
import unittest
import simple
class TestSimple(unittest.TestCase):
def test_foobar(self):
self.assertEqual(simple.foobar(), 42)
---
name: kappa-python-sample
environments:
dev:
profile: <your dev profile>
region: <your dev region e.g. us-west-2>
policy:
resources:
- arn: arn:aws:dynamodb:us-west-2:123456789012:table/kappa-python-sample
actions:
- "*"
- arn: arn:aws:logs:*:*:*
actions:
- "*"
prod:
profile: <your prod profile>
region: <your prod region e.g. us-west-2>
policy:
resources:
- arn: arn:aws:dynamodb:us-west-2:234567890123:table/kappa-python-sample
actions:
- "*"
- arn: arn:aws:logs:*:*:*
actions:
- "*"
lambda:
description: A simple Python sample
handler: simple.handler
runtime: python2.7
memory_size: 256
timeout: 3
\ No newline at end of file
.kappa/
kappa.yml
*.zip
The Code Is Here!
=================
Installing Dependencies
-----------------------
Put all dependencies in the `requirements.txt` file in this directory and then
run the following command to install them in this directory prior to uploading
the code.
$ pip install -r requirements.txt -t /full/path/to/this/code
This will install all of the dependencies inside the code directory so they can
be bundled with your own code and deployed to Lambda.
The ``setup.cfg`` file in this directory is required if you are running on
MacOS and are using brew. It may not be needed on other platforms.
import logging
LOG = logging.getLogger()
LOG.setLevel(logging.DEBUG)
def handler(event, context):
LOG.debug(event)
return {'status': 'success'}
{
"foo": "bar",
"fie": "baz"
}
---
name: kappa-simple
environments:
dev:
profile: <your profile here>
region: <your region here>
policy:
resources:
- arn: arn:aws:logs:*:*:*
actions:
- "*"
prod:
profile: <your profile here>
region: <your region here>
policy:
resources:
- arn: arn:aws:logs:*:*:*
actions:
- "*"
lambda:
description: A very simple Kappa example
handler: simple.handler
runtime: python2.7
memory_size: 128
timeout: 3
\ No newline at end of file
@@ -5,8 +5,9 @@ from setuptools import setup, find_packages
import os
requires = [
'boto3>=1.2.2',
'placebo>=0.4.1',
'click>=5.0',
'PyYAML>=3.11'
]
@@ -22,7 +23,10 @@ setup(
packages=find_packages(exclude=['tests*']),
package_data={'kappa': ['_version']},
package_dir={'kappa': 'kappa'},
entry_points="""
[console_scripts]
kappa=kappa.scripts.cli:cli
""",
install_requires=requires,
license=open("LICENSE").read(),
classifiers=(
@@ -32,10 +36,10 @@ setup(
'Natural Language :: English',
'License :: OSI Approved :: Apache Software License',
'Programming Language :: Python',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.3',
'Programming Language :: Python :: 3.4',
'Programming Language :: Python :: 3.5'
),
)
[foobar]
aws_access_key_id = foo
aws_secret_access_key = bar
{
"Statement":[
{"Condition":
{"ArnLike":{"AWS:SourceArn":"arn:aws:sns:us-east-1:123456789012:lambda_topic"}},
"Resource":"arn:aws:lambda:us-east-1:123456789023:function:messageStore",
"Action":"lambda:invokeFunction",
"Principal":{"Service":"sns.amazonaws.com"},
"Sid":"sns invoke","Effect":"Allow"
}],
"Id":"default",
"Version":"2012-10-17"
}
dev: {config_md5: 3ccd0a5630fa4e0d0effeb9de0b551a3, policy_md5: 12273b7917929c02cfc755f4700e1e2b,
zip_md5: b6605fd4990542106fa95b62ea62d70e}
def handler(event, context):
return {'status': 'success'}
---
name: kappa-simple
environments:
dev:
profile: foobar
region: us-west-2
policy:
resources:
- arn: arn:aws:logs:*:*:*
actions:
- "*"
lambda:
description: Foo the Bar
handler: simple.handler
runtime: python2.7
memory_size: 256
timeout: 3
import inspect
import mock
import tests.unit.responses as responses
class MockAWS(object):
def __init__(self, profile=None, region=None):
self.response_map = {}
for name, value in inspect.getmembers(responses):
if name.startswith('__'):
continue
if '_' in name:
service_name, request_name = name.split('_', 1)
if service_name not in self.response_map:
self.response_map[service_name] = {}
self.response_map[service_name][request_name] = value
def create_client(self, client_name):
client = None
if client_name in self.response_map:
client = mock.Mock()
for request in self.response_map[client_name]:
response = self.response_map[client_name][request]
setattr(client, request, mock.Mock(side_effect=response))
return client
def get_aws(context):
return MockAWS()
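The helper above maps fixture functions named `<service>_<request>` onto mocked clients. A self-contained sketch of the same pattern (the `logs_describe_log_streams` fixture and its payload are invented stand-ins for `tests.unit.responses`, which is not shown in this diff):

```python
# Sketch of the MockAWS pattern above, with the fixture module stubbed
# inline. Attribute names follow the <service>_<request> convention;
# each holds a list of canned responses consumed via Mock(side_effect=...).
import inspect
import types
from unittest import mock

# Hypothetical stand-in for tests.unit.responses.
responses = types.SimpleNamespace(
    logs_describe_log_streams=[{'logStreams': []}],
)

class MockAWS(object):
    def __init__(self, profile=None, region=None):
        self.response_map = {}
        for name, value in inspect.getmembers(responses):
            if name.startswith('__'):
                continue
            if '_' in name:
                service_name, request_name = name.split('_', 1)
                self.response_map.setdefault(service_name, {})[request_name] = value

    def create_client(self, client_name):
        # Unknown services yield None, matching the original helper.
        client = None
        if client_name in self.response_map:
            client = mock.Mock()
            for request, response in self.response_map[client_name].items():
                setattr(client, request, mock.Mock(side_effect=response))
        return client

aws = MockAWS()
client = aws.create_client('logs')
response = client.describe_log_streams()
print(response)  # {'logStreams': []}
```

Each canned response is consumed in order, so a fixture list of N entries supports exactly N calls to that request.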
{
"status_code": 200,
"data": {
"ResponseMetadata": {
"HTTPStatusCode": 200,
"RequestId": "1276680a-a219-11e5-8386-d3391e1d709e"
}
}
}
\ No newline at end of file
{
"status_code": 200,
"data": {
"Policy": {
"PolicyName": "kappa-simple_dev",
"CreateDate": {
"hour": 4,
"__class__": "datetime",
"month": 12,
"second": 46,
"microsecond": 302000,
"year": 2015,
"day": 14,
"minute": 13
},
"AttachmentCount": 0,
"IsAttachable": true,
"PolicyId": "ANPAJ6USPUIU5QKQ7DWMG",
"DefaultVersionId": "v1",
"Path": "/kappa/",
"Arn": "arn:aws:iam::123456789012:policy/kappa/kappa-simple_dev",
"UpdateDate": {
"hour": 4,
"__class__": "datetime",
"month": 12,
"second": 46,
"microsecond": 302000,
"year": 2015,
"day": 14,
"minute": 13
}
},
"ResponseMetadata": {
"HTTPStatusCode": 200,
"RequestId": "11cdf3d8-a219-11e5-a392-d5ea3c3fc695"
}
}
}
{
"status_code": 200,
"data": {
"Role": {
"AssumeRolePolicyDocument": "%7B%0A%20%20%20%20%22Version%22%20%3A%20%222012-10-17%22%2C%0A%20%20%20%20%22Statement%22%3A%20%5B%20%7B%0A%20%20%20%20%20%20%20%20%22Effect%22%3A%20%22Allow%22%2C%0A%20%20%20%20%20%20%20%20%22Principal%22%3A%20%7B%0A%20%20%20%20%20%20%20%20%20%20%20%20%22Service%22%3A%20%5B%20%22lambda.amazonaws.com%22%20%5D%0A%20%20%20%20%20%20%20%20%7D%2C%0A%20%20%20%20%20%20%20%20%22Action%22%3A%20%5B%20%22sts%3AAssumeRole%22%20%5D%0A%20%20%20%20%7D%20%5D%0A%7D",
"RoleId": "AROAICWPJDQLUTEOHRQZO",
"CreateDate": {
"hour": 4,
"__class__": "datetime",
"month": 12,
"second": 46,
"microsecond": 988000,
"year": 2015,
"day": 14,
"minute": 13
},
"RoleName": "kappa-simple_dev",
"Path": "/kappa/",
"Arn": "arn:aws:iam::123456789012:role/kappa/kappa-simple_dev"
},
"ResponseMetadata": {
"HTTPStatusCode": 200,
"RequestId": "123d5777-a219-11e5-8386-d3391e1d709e"
}
}
}
{
"status_code": 200,
"data": {
"Role": {
"AssumeRolePolicyDocument": "%7B%22Version%22%3A%222012-10-17%22%2C%22Statement%22%3A%5B%7B%22Effect%22%3A%22Allow%22%2C%22Principal%22%3A%7B%22Service%22%3A%22lambda.amazonaws.com%22%7D%2C%22Action%22%3A%22sts%3AAssumeRole%22%7D%5D%7D",
"RoleId": "AROAICWPJDQLUTEOHRQZO",
"CreateDate": {
"hour": 4,
"__class__": "datetime",
"month": 12,
"second": 46,
"microsecond": 0,
"year": 2015,
"day": 14,
"minute": 13
},
"RoleName": "kappa-simple_dev",
"Path": "/kappa/",
"Arn": "arn:aws:iam::123456789012:role/kappa/kappa-simple_dev"
},
"ResponseMetadata": {
"HTTPStatusCode": 200,
"RequestId": "12dca49a-a219-11e5-9912-d70327f9be2c"
}
}
}
{
"status_code": 200,
"data": {
"Role": {
"AssumeRolePolicyDocument": "%7B%22Version%22%3A%222012-10-17%22%2C%22Statement%22%3A%5B%7B%22Effect%22%3A%22Allow%22%2C%22Principal%22%3A%7B%22Service%22%3A%22lambda.amazonaws.com%22%7D%2C%22Action%22%3A%22sts%3AAssumeRole%22%7D%5D%7D",
"RoleId": "AROAICWPJDQLUTEOHRQZO",
"CreateDate": {
"hour": 4,
"__class__": "datetime",
"month": 12,
"second": 46,
"microsecond": 0,
"year": 2015,
"day": 14,
"minute": 13
},
"RoleName": "kappa-simple_dev",
"Path": "/kappa/",
"Arn": "arn:aws:iam::123456789012:role/kappa/kappa-simple_dev"
},
"ResponseMetadata": {
"HTTPStatusCode": 200,
"RequestId": "1bd39022-a219-11e5-bb1e-6b18bfdcba09"
}
}
}
{
"status_code": 200,
"data": {
"ResponseMetadata": {
"HTTPStatusCode": 200,
"RequestId": "1264405a-a219-11e5-ad54-c769aa17a0a1"
},
"IsTruncated": false,
"Policies": [
{
"PolicyName": "kappa-simple_dev",
"CreateDate": {
"hour": 4,
"__class__": "datetime",
"month": 12,
"second": 46,
"microsecond": 0,
"year": 2015,
"day": 14,
"minute": 13
},
"AttachmentCount": 0,
"IsAttachable": true,
"PolicyId": "ANPAJ6USPUIU5QKQ7DWMG",
"DefaultVersionId": "v1",
"Path": "/kappa/",
"Arn": "arn:aws:iam::123456789012:policy/kappa/kappa-simple_dev",
"UpdateDate": {
"hour": 4,
"__class__": "datetime",
"month": 12,
"second": 46,
"microsecond": 0,
"year": 2015,
"day": 14,
"minute": 13
}
},
{
"PolicyName": "FooBar15",
"CreateDate": {
"hour": 19,
"__class__": "datetime",
"month": 12,
"second": 15,
"microsecond": 0,
"year": 2015,
"day": 10,
"minute": 22
},
"AttachmentCount": 1,
"IsAttachable": true,
"PolicyId": "ANPAJ3MM445EFVC6OWPIO",
"DefaultVersionId": "v1",
"Path": "/kappa/",
"Arn": "arn:aws:iam::123456789012:policy/kappa/FooBar15",
"UpdateDate": {
"hour": 19,
"__class__": "datetime",
"month": 12,
"second": 15,
"microsecond": 0,
"year": 2015,
"day": 10,
"minute": 22
}
}
]
}
}
{
"status_code": 200,
"data": {
"ResponseMetadata": {
"HTTPStatusCode": 200,
"RequestId": "1b40516e-a219-11e5-bb1e-6b18bfdcba09"
},
"IsTruncated": false,
"Policies": [
{
"PolicyName": "kappa-simple_dev",
"CreateDate": {
"hour": 4,
"__class__": "datetime",
"month": 12,
"second": 46,
"microsecond": 0,
"year": 2015,
"day": 14,
"minute": 13
},
"AttachmentCount": 1,
"IsAttachable": true,
"PolicyId": "ANPAJ6USPUIU5QKQ7DWMG",
"DefaultVersionId": "v1",
"Path": "/kappa/",
"Arn": "arn:aws:iam::123456789012:policy/kappa/kappa-simple_dev",
"UpdateDate": {
"hour": 4,
"__class__": "datetime",
"month": 12,
"second": 46,
"microsecond": 0,
"year": 2015,
"day": 14,
"minute": 13
}
},
{
"PolicyName": "FooBar15",
"CreateDate": {
"hour": 19,
"__class__": "datetime",
"month": 12,
"second": 15,
"microsecond": 0,
"year": 2015,
"day": 10,
"minute": 22
},
"AttachmentCount": 1,
"IsAttachable": true,
"PolicyId": "ANPAJ3MM445EFVC6OWPIO",
"DefaultVersionId": "v1",
"Path": "/kappa/",
"Arn": "arn:aws:iam::123456789012:policy/kappa/FooBar15",
"UpdateDate": {
"hour": 19,
"__class__": "datetime",
"month": 12,
"second": 15,
"microsecond": 0,
"year": 2015,
"day": 10,
"minute": 22
}
}
]
}
}
{
"status_code": 200,
"data": {
"ResponseMetadata": {
"HTTPStatusCode": 200,
"RequestId": "120be6dd-a219-11e5-ad54-c769aa17a0a1"
},
"IsTruncated": false,
"Roles": [
{
"AssumeRolePolicyDocument": "%7B%22Version%22%3A%222012-10-17%22%2C%22Statement%22%3A%5B%7B%22Sid%22%3A%22%22%2C%22Effect%22%3A%22Allow%22%2C%22Principal%22%3A%7B%22Service%22%3A%22lambda.amazonaws.com%22%7D%2C%22Action%22%3A%22sts%3AAssumeRole%22%7D%5D%7D",
"RoleId": "AROAJC6I44KNC2N4C6DUO",
"CreateDate": {
"hour": 13,
"__class__": "datetime",
"month": 8,
"second": 29,
"microsecond": 0,
"year": 2015,
"day": 12,
"minute": 10
},
"RoleName": "FooBar1",
"Path": "/kappa/",
"Arn": "arn:aws:iam::123456789012:role/kappa/FooBar1"
},
{
"AssumeRolePolicyDocument": "%7B%22Version%22%3A%222012-10-17%22%2C%22Statement%22%3A%5B%7B%22Sid%22%3A%22%22%2C%22Effect%22%3A%22Allow%22%2C%22Principal%22%3A%7B%22AWS%22%3A%22arn%3Aaws%3Aiam%3A%3A433502988969%3Aroot%22%7D%2C%22Action%22%3A%22sts%3AAssumeRole%22%7D%5D%7D",
"RoleId": "AROAIPICAZWCWSIUY6WBC",
"CreateDate": {
"hour": 6,
"__class__": "datetime",
"month": 5,
"second": 3,
"microsecond": 0,
"year": 2015,
"day": 5,
"minute": 31
},
"RoleName": "FooBar2",
"Path": "/",
"Arn": "arn:aws:iam::123456789012:role/FooBar2"
}
]
}
}
{
"status_code": 200,
"data": {
"ResponseMetadata": {
"HTTPStatusCode": 200,
"RequestId": "1b6a1fab-a219-11e5-bb1e-6b18bfdcba09"
},
"IsTruncated": false,
"Roles": [
{
"AssumeRolePolicyDocument": "%7B%22Version%22%3A%222012-10-17%22%2C%22Statement%22%3A%5B%7B%22Effect%22%3A%22Allow%22%2C%22Principal%22%3A%7B%22Service%22%3A%22lambda.amazonaws.com%22%7D%2C%22Action%22%3A%22sts%3AAssumeRole%22%7D%5D%7D",
"RoleId": "AROAICWPJDQLUTEOHRQZO",
"CreateDate": {
"hour": 4,
"__class__": "datetime",
"month": 12,
"second": 46,
"microsecond": 0,
"year": 2015,
"day": 14,
"minute": 13
},
"RoleName": "kappa-simple_dev",
"Path": "/kappa/",
"Arn": "arn:aws:iam::123456789012:role/kappa/kappa-simple_dev"
},
{
"AssumeRolePolicyDocument": "%7B%22Version%22%3A%222012-10-17%22%2C%22Statement%22%3A%5B%7B%22Sid%22%3A%22%22%2C%22Effect%22%3A%22Allow%22%2C%22Principal%22%3A%7B%22AWS%22%3A%22arn%3Aaws%3Aiam%3A%3A123456789012%3Aroot%22%7D%2C%22Action%22%3A%22sts%3AAssumeRole%22%2C%22Condition%22%3A%7B%22StringEquals%22%3A%7B%22sts%3AExternalId%22%3A%22c196gvft3%22%7D%7D%7D%5D%7D",
"RoleId": "AROAJGQVUYMCJZYCM3MR4",
"CreateDate": {
"hour": 15,
"__class__": "datetime",
"month": 6,
"second": 2,
"microsecond": 0,
"year": 2015,
"day": 12,
"minute": 53
},
"RoleName": "kate-test-policy-role",
"Path": "/",
"Arn": "arn:aws:iam::123456789012:role/kate-test-policy-role"
}
]
}
}
{
"status_code": 201,
"data": {
"AliasArn": "arn:aws:lambda:us-west-2:123456789012:function:kappa-simple:dev",
"FunctionVersion": "12",
"Name": "dev",
"ResponseMetadata": {
"HTTPStatusCode": 201,
"RequestId": "1872d8ff-a219-11e5-9579-ab6c3f6de03e"
},
"Description": "For stage dev"
}
}
{
"status_code": 400,
"data": {
"ResponseMetadata": {
"HTTPStatusCode": 400,
"RequestId": "12ed468e-a219-11e5-89fa-9b1d3e60e617"
},
"Error": {
"Message": "The role defined for the task cannot be assumed by Lambda.",
"Code": "InvalidParameterValueException"
}
}
}
\ No newline at end of file
{
"status_code": 400,
"data": {
"ResponseMetadata": {
"HTTPStatusCode": 400,
"RequestId": "14375279-a219-11e5-b9da-196ca0eccf24"
},
"Error": {
"Message": "The role defined for the task cannot be assumed by Lambda.",
"Code": "InvalidParameterValueException"
}
}
}
\ No newline at end of file
{
"status_code": 400,
"data": {
"ResponseMetadata": {
"HTTPStatusCode": 400,
"RequestId": "158815a1-a219-11e5-b354-111009c28f60"
},
"Error": {
"Message": "The role defined for the task cannot be assumed by Lambda.",
"Code": "InvalidParameterValueException"
}
}
}
\ No newline at end of file
{
"status_code": 400,
"data": {
"ResponseMetadata": {
"HTTPStatusCode": 400,
"RequestId": "16d88a59-a219-11e5-abfc-a3c6c8e4d88f"
},
"Error": {
"Message": "The role defined for the task cannot be assumed by Lambda.",
"Code": "InvalidParameterValueException"
}
}
}
\ No newline at end of file
{
"status_code": 201,
"data": {
"CodeSha256": "JklpzNjuO6TLDiNe6nVYWeo1Imq6bF5uaMt2L0bqp5Y=",
"FunctionName": "kappa-simple",
"ResponseMetadata": {
"HTTPStatusCode": 201,
"RequestId": "1820256f-a219-11e5-acaa-ebe01320cf02"
},
"CodeSize": 948,
"MemorySize": 256,
"FunctionArn": "arn:aws:lambda:us-west-2:123456789012:function:kappa-simple",
"Version": "12",
"Role": "arn:aws:iam::123456789012:role/kappa/kappa-simple_dev",
"Timeout": 3,
"LastModified": "2015-12-14T04:13:56.737+0000",
"Handler": "simple.handler",
"Runtime": "python2.7",
"Description": "A very simple Kappa example"
}
}
{
"status_code": 404,
"data": {
"ResponseMetadata": {
"HTTPStatusCode": 404,
"RequestId": "12caa276-a219-11e5-bc80-bb0600635952"
},
"Error": {
"Message": "Function not found: arn:aws:lambda:us-west-2:860421987956:function:kappa-simple",
"Code": "ResourceNotFoundException"
}
}
}
\ No newline at end of file
{
"status_code": 200,
"data": {
"Code": {
"RepositoryType": "S3",
"Location": "https://awslambda-us-west-2-tasks.s3-us-west-2.amazonaws.com/snapshots/123456789012/kappa-simple-99dba060-c458-48c6-ab7b-501063603e69?x-amz-security-token=AQoDYXdzECQa4AOvxYmkiVqa3ost0drsHs84f3tyUBYSVQUm%2BVvFZgAqx9JDt55l4N4T%2FwH8302pH0ICUZfCRRfc%2FuWtukJsT33XIsG6Xw0Br8w00y07RRpZYQLiJqTXi0i2EFZ6LMIRsGBgKV%2BdufXXu7P9yfzqBiFUrfUD6fYeRNLdv34aXUDto0G0gTj3ZDv9gqO9q7YEXbeu1NI62cIfuEGph2ptFj5V1E%2BijK0h9XEW0mkfuomQt6oeii%2FkkNNm5tEyUlpeX17z1sbX3NYoqJrap0QdoqXkak%2BFPvJQG7hm7eJ40b2ymve9L3gvIOiKNzmQrzay77uEkYDNLxK89QMlYRtRG6vTHppdZzTVIooTFVdA6NSSvYHnjryStLA3VUnDG%2FsL9xAiHH8l4kzq%2ByvatF%2Fg8wTNXOdFxt0VMVkJVbwG%2FUex7juyEcRAJUGNaHBZNLPJVUL%2BfAQljCwJAnjXxD%2FpjEtyLi9YbdfLGywkBKccoKh7AmjJXwzT8TusWNKmmW0XJL%2Fn81NE84Ni9iVB8JHxRbwaJXT2ou0ytwn%2BIIlRcmwXSIwA3xm%2FXynUTfOuXZ3UMGuBlHtt45uKGJvvp5d6RQicK5q5LXFQgGxj5gUqgty0jPhPE%2BN%2BF8WUwSk3eNwPiwMgwOS4swU%3D&AWSAccessKeyId=ASIAIHZZJVPM3RQS3QOQ&Expires=1450067042&Signature=QeC65kDb6N4CNRGn9IiQNBSpl4g%3D"
},
"Configuration": {
"Version": "$LATEST",
"CodeSha256": "JklpzNjuO6TLDiNe6nVYWeo1Imq6bF5uaMt2L0bqp5Y=",
"FunctionName": "kappa-simple",
"MemorySize": 256,
"CodeSize": 948,
"FunctionArn": "arn:aws:lambda:us-west-2:123456789012:function:kappa-simple",
"Handler": "simple.handler",
"Role": "arn:aws:iam::123456789012:role/kappa/kappa-simple_dev",
"Timeout": 3,
"LastModified": "2015-12-14T04:13:56.737+0000",
"Runtime": "python2.7",
"Description": "A very simple Kappa example"
},
"ResponseMetadata": {
"HTTPStatusCode": 200,
"RequestId": "1bc69855-a219-11e5-990d-c158fa575e6a"
}
}
}
{
"status_code": 200,
"data": {
"ResponseMetadata": {
"HTTPStatusCode": 200,
"RequestId": "1860ff11-a219-11e5-b9da-196ca0eccf24"
},
"Versions": [
{
"Version": "$LATEST",
"CodeSha256": "JklpzNjuO6TLDiNe6nVYWeo1Imq6bF5uaMt2L0bqp5Y=",
"FunctionName": "kappa-simple",
"MemorySize": 256,
"CodeSize": 948,
"FunctionArn": "arn:aws:lambda:us-west-2:123456789012:function:kappa-simple:$LATEST",
"Handler": "simple.handler",
"Role": "arn:aws:iam::123456789012:role/kappa/kappa-simple_dev",
"Timeout": 3,
"LastModified": "2015-12-14T04:13:56.737+0000",
"Runtime": "python2.7",
"Description": "A very simple Kappa example"
},
{
"Version": "12",
"CodeSha256": "JklpzNjuO6TLDiNe6nVYWeo1Imq6bF5uaMt2L0bqp5Y=",
"FunctionName": "kappa-simple",
"MemorySize": 256,
"CodeSize": 948,
"FunctionArn": "arn:aws:lambda:us-west-2:123456789012:function:kappa-simple:12",
"Handler": "simple.handler",
"Role": "arn:aws:iam::123456789012:role/kappa/kappa-simple_dev",
"Timeout": 3,
"LastModified": "2015-12-14T04:13:56.737+0000",
"Runtime": "python2.7",
"Description": "A very simple Kappa example"
}
]
}
}
# Copyright (c) 2015 Mitch Garnaat http://garnaat.org/
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
import unittest
import os
import shutil
import mock
import placebo
import kappa.context
import kappa.awsclient
class TestDeploy(unittest.TestCase):
def setUp(self):
self.environ = {}
self.environ_patch = mock.patch('os.environ', self.environ)
self.environ_patch.start()
credential_path = os.path.join(os.path.dirname(__file__), 'cfg',
'aws_credentials')
self.environ['AWS_SHARED_CREDENTIALS_FILE'] = credential_path
self.prj_path = os.path.join(os.path.dirname(__file__), 'foobar')
cache_file = os.path.join(self.prj_path, '.kappa')
if os.path.exists(cache_file):
shutil.rmtree(cache_file)
self.data_path = os.path.join(os.path.dirname(__file__), 'responses')
self.data_path = os.path.join(self.data_path, 'deploy')
self.session = kappa.awsclient.create_session('foobar', 'us-west-2')
def tearDown(self):
pass
def test_deploy(self):
pill = placebo.attach(self.session, self.data_path)
pill.playback()
os.chdir(self.prj_path)
cfg_filepath = os.path.join(self.prj_path, 'kappa.yml')
cfg_fp = open(cfg_filepath)
ctx = kappa.context.Context(cfg_fp, 'dev')
ctx.deploy()
ctx.deploy()
# Copyright (c) 2014 Mitch Garnaat http://garnaat.org/
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
import unittest
import mock
from kappa.log import Log
from tests.unit.mock_aws import get_aws
class TestLog(unittest.TestCase):
def setUp(self):
self.aws_patch = mock.patch('kappa.aws.get_aws', get_aws)
self.mock_aws = self.aws_patch.start()
def tearDown(self):
self.aws_patch.stop()
def test_streams(self):
mock_context = mock.Mock()
log = Log(mock_context, 'foo/bar')
streams = log.streams()
self.assertEqual(len(streams), 6)
def test_tail(self):
mock_context = mock.Mock()
log = Log(mock_context, 'foo/bar')
events = log.tail()
self.assertEqual(len(events), 6)
self.assertEqual(events[0]['ingestionTime'], 1420569036909)
self.assertIn('RequestId: 23007242-95d2-11e4-a10e-7b2ab60a7770',
events[-1]['message'])
# Copyright (c) 2015 Mitch Garnaat http://garnaat.org/
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
import unittest
import os
import mock
from kappa.policy import Policy
from tests.unit.mock_aws import get_aws
Config1 = {
'name': 'FooPolicy',
'description': 'This is the Foo policy',
'document': 'FooPolicy.json'}
Config2 = {
'name': 'BazPolicy',
'description': 'This is the Baz policy',
'document': 'BazPolicy.json'}
def path(filename):
return os.path.join(os.path.dirname(__file__), 'data', filename)
class TestPolicy(unittest.TestCase):
def setUp(self):
self.aws_patch = mock.patch('kappa.aws.get_aws', get_aws)
self.mock_aws = self.aws_patch.start()
Config1['document'] = path(Config1['document'])
Config2['document'] = path(Config2['document'])
def tearDown(self):
self.aws_patch.stop()
def test_properties(self):
mock_context = mock.Mock()
policy = Policy(mock_context, Config1)
self.assertEqual(policy.name, Config1['name'])
self.assertEqual(policy.document, Config1['document'])
self.assertEqual(policy.description, Config1['description'])
def test_exists(self):
mock_context = mock.Mock()
policy = Policy(mock_context, Config1)
self.assertTrue(policy.exists())
def test_not_exists(self):
mock_context = mock.Mock()
policy = Policy(mock_context, Config2)
self.assertFalse(policy.exists())
def test_create(self):
mock_context = mock.Mock()
policy = Policy(mock_context, Config2)
policy.create()
def test_delete(self):
mock_context = mock.Mock()
policy = Policy(mock_context, Config1)
policy.delete()
# Copyright (c) 2015 Mitch Garnaat http://garnaat.org/
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
import unittest
import mock
from kappa.role import Role
from tests.unit.mock_aws import get_aws
Config1 = {'name': 'FooRole'}
Config2 = {'name': 'BazRole'}
class TestRole(unittest.TestCase):
def setUp(self):
self.aws_patch = mock.patch('kappa.aws.get_aws', get_aws)
self.mock_aws = self.aws_patch.start()
def tearDown(self):
self.aws_patch.stop()
def test_properties(self):
mock_context = mock.Mock()
role = Role(mock_context, Config1)
self.assertEqual(role.name, Config1['name'])
def test_exists(self):
mock_context = mock.Mock()
role = Role(mock_context, Config1)
self.assertTrue(role.exists())
def test_not_exists(self):
mock_context = mock.Mock()
role = Role(mock_context, Config2)
self.assertFalse(role.exists())
def test_create(self):
mock_context = mock.Mock()
role = Role(mock_context, Config2)
role.create()
def test_delete(self):
mock_context = mock.Mock()
role = Role(mock_context, Config1)
role.delete()