Commit 2d9681b45e68de55b4f7c6868337d33b5e1a11c3 (1 parent: c6b66d97)
Authored by Mitch Garnaat, 2015-12-08 18:35:16 -0500

Update the README file.
Showing 1 changed file with 151 additions and 49 deletions
README.md
@@ -45,52 +45,154 @@ Or for the development version:
pip install git+https://github.com/garnaat/kappa.git
Getting Started
---------------
Kappa is a command line tool. The basic command format is:

    kappa <path to config file> <command> [optional command args]

Where ``command`` is one of:
* ``deploy`` - does whatever is required to deploy the current version of your
  Lambda function, such as creating/updating policies and roles and creating or
  updating the function itself
* ``delete`` - delete the Lambda function, remove any event sources, delete the
  IAM policy and role
* ``invoke`` - make a synchronous call to your Lambda function, passing test
  data, and display the resulting log data (roughly the boto3 call sketched
  after this list)
* ``invoke_async`` - make an asynchronous call to your Lambda function, passing
  test data
* ``dryrun`` - make the call but only check things like permissions and report
  back; don't actually run the code
* ``tail`` - display the most recent log events for the function (remember that
  it can take several minutes before log events are available from CloudWatch)
* ``add_event_sources`` - hook up an event source to your Lambda function
* ``update_event_sources`` - update the event sources based on the information
  in your kappa config file
* ``status`` - display summary information about functions, stacks, and event
  sources related to your project
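``invoke``, ``invoke_async``, and ``dryrun`` correspond closely to the
``InvocationType`` values of the AWS Lambda ``Invoke`` API. As a rough
illustration, here is what such a call looks like using boto3 directly; this
is a sketch, not kappa's actual implementation, and the function name, region,
and test file are placeholders:

```python
# A sketch of the underlying AWS call, not kappa's code. Assumes
# boto3 is installed and AWS credentials are configured.
import boto3

client = boto3.client('lambda', region_name='us-east-1')  # placeholder region

with open('_tests/test_one.json', 'rb') as fp:
    payload = fp.read()

response = client.invoke(
    FunctionName='kappa-simple-dev',    # placeholder function name
    InvocationType='RequestResponse',   # 'Event' ~ invoke_async, 'DryRun' ~ dryrun
    Payload=payload,
)
print(response['Payload'].read())
```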
The ``config file`` is a YAML format file containing all of the information
about your Lambda function.

If you use environment variables for your AWS credentials (as normally
supported by boto), simply exclude the ``profile`` element from the YAML file.
An example project based on a Kinesis stream can be found in
[samples/kinesis](https://github.com/garnaat/kappa/tree/develop/samples/kinesis).
The basic workflow is:

* Create your Lambda function
* Create any custom IAM policy you need to execute your Lambda function
* Create some sample data
* Create the YAML config file with all of the information
* Run ``kappa <path-to-config> deploy`` to create roles and upload the function
* Run ``kappa <path-to-config> invoke`` to invoke the function with test data
* Run ``kappa <path-to-config> deploy`` to upload new code for your Lambda
  function
* Run ``kappa <path-to-config> add_event_sources`` to hook your function up to
  the event source
* Run ``kappa <path-to-config> tail`` to see more output
Quick Start
-----------
To get a feel for how kappa works, let's take a look at a very simple example
contained in the ``samples/simple`` directory of the kappa distribution. This
example is so simple, in fact, that it doesn't really do anything. It's just a
small Lambda function (written in Python) that accepts some JSON input, logs
that input to CloudWatch logs, and returns a JSON document back.
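A handler with that behavior might look like the following (a hypothetical
sketch; the actual ``simple.py`` lives in ``_src`` within the sample):

```python
# simple.py (hypothetical sketch): log the incoming event to
# CloudWatch Logs and return a hard-coded JSON document.
import logging

logger = logging.getLogger()
logger.setLevel(logging.DEBUG)


def handler(event, context):
    logger.debug(event)            # appears in the CloudWatch log stream
    return {'status': 'success'}   # serialized to JSON in the response
```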
The structure of the directory is:
```
simple/
├── _src
│ ├── README.md
│ ├── requirements.txt
│ ├── setup.cfg
│ └── simple.py
├── _tests
│ └── test_one.json
└── kappa.yml.sample
```
Within the directory we see:

* ``kappa.yml.sample``, which is a sample YAML configuration file for the
  project
* ``_src``, which is a directory containing the source code for the Lambda
  function
* ``_tests``, which is a directory containing some test data
The first step is to make a copy of the sample configuration file:

    $ cd simple
    $ cp kappa.yml.sample kappa.yml
Now you will need to edit ``kappa.yml`` slightly for your use. The file looks
like this:
```
---
name: kappa-simple
environments:
  dev:
    profile: <your profile here>
    region: <your region here>
    policy:
      resources:
        - arn: arn:aws:logs:*:*:*
          actions:
            - "*"
  prod:
    profile: <your profile here>
    region: <your region here>
    policy:
      resources:
        - arn: arn:aws:logs:*:*:*
          actions:
            - "*"
lambda:
  description: A very simple Kappa example
  handler: simple.handler
  runtime: python2.7
  memory_size: 128
  timeout: 3
```
The ``name`` at the top is just a name used for this Lambda function and other
things we create that are related to this Lambda function (e.g. roles,
policies, etc.).
The ``environments`` section is where we define the different environments into
which we wish to deploy this Lambda function. Each environment is identified by
a ``profile`` (as used in the AWS CLI and other AWS tools) and a ``region``.
You can define as many environments as you wish, but each invocation of
``kappa`` will deal with a single environment. Each environment section also
includes a ``policy`` section. This is where we tell kappa about AWS resources
that our Lambda function needs access to and what kind of access it requires.
For example, your Lambda function may need to read from an SNS topic or write
to a DynamoDB table, and this is where you would provide the ARNs
([Amazon Resource Names](http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html))
that identify those resources. Since this is a very simple example, the only
resource listed here is for CloudWatch logs so that our Lambda function is able
to write to the CloudWatch log group that will be created for it automatically
by AWS Lambda.
The ``lambda`` section contains the configuration information about our Lambda
function. These values are passed to Lambda when we create the function and
can be updated at any time afterwards.
To modify this for your own use, you just need to put in the right values for
``profile`` and ``region`` in one of the environment sections. You can also
change the names of the environments to be whatever you like, but the name
``dev`` is the default value used by kappa, so it's kind of handy to avoid
typing.
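Since the file is plain YAML, you can sanity-check your edits by loading it
with PyYAML (a hypothetical convenience snippet, not part of kappa; assumes
the ``PyYAML`` package is installed):

```python
# Load kappa.yml and print a few fields to confirm the structure.
# This is just a convenience check, not something kappa requires.
import yaml

with open('kappa.yml') as fp:
    config = yaml.safe_load(fp)

print(config['name'])                  # kappa-simple
print(sorted(config['environments']))  # ['dev', 'prod']
print(config['lambda']['handler'])     # simple.handler
```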
Once you have made the necessary modifications, you should be ready to deploy
your Lambda function to the AWS Lambda service. To do so, just do this:
```
$ kappa deploy
```
This assumes you want to deploy the default environment called ``dev`` and that
you have named your config file ``kappa.yml``. If, instead, you called your
environment ``test`` and named your config file ``foo.yml``, you would do this:
```
$ kappa --env test --config foo.yml deploy
```
In either case, you should see output that looks something like this:
```
$ kappa deploy
deploying
...deploying policy kappa-simple-dev
...creating function kappa-simple-dev
done
$
```
So, what kappa has done is create a new Managed Policy called
``kappa-simple-dev`` that grants access to the CloudWatch Logs service. It has
also created an IAM role called ``kappa-simple-dev`` that uses that policy.
And finally, it has zipped up our Python code and created a function in AWS
Lambda called ``kappa-simple-dev``.
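In terms of the AWS APIs, that sequence corresponds roughly to the following
boto3 calls. This is a simplified sketch inferred from the config above and
the standard Lambda service trust policy, not kappa's actual implementation
(which also handles updates, packaging, and error cases):

```python
# A simplified sketch of the AWS API calls behind `kappa deploy`.
# The region is a placeholder; simple.zip stands in for the zipped _src/.
import json
import boto3

iam = boto3.client('iam')
lam = boto3.client('lambda', region_name='us-east-1')  # placeholder region

trust = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "lambda.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "*",                     # actions: "*" in kappa.yml
        "Resource": "arn:aws:logs:*:*:*",  # the CloudWatch Logs ARN
    }],
}

created_policy = iam.create_policy(
    PolicyName='kappa-simple-dev', PolicyDocument=json.dumps(policy))
role = iam.create_role(
    RoleName='kappa-simple-dev', AssumeRolePolicyDocument=json.dumps(trust))
iam.attach_role_policy(
    RoleName='kappa-simple-dev',
    PolicyArn=created_policy['Policy']['Arn'])

with open('simple.zip', 'rb') as fp:  # zip of the _src directory
    code = fp.read()

lam.create_function(
    FunctionName='kappa-simple-dev',
    Runtime='python2.7',
    Role=role['Role']['Arn'],
    Handler='simple.handler',
    Code={'ZipFile': code},
    Description='A very simple Kappa example',
    MemorySize=128,
    Timeout=3,
)
```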
To test this out, try this:
```
$ kappa invoke _tests/test_one.json
invoking
START RequestId: 0f2f9ecf-9df7-11e5-ae87-858fbfb8e85f Version: $LATEST
[DEBUG] 2015-12-08T22:00:15.363Z 0f2f9ecf-9df7-11e5-ae87-858fbfb8e85f {u'foo': u'bar', u'fie': u'baz'}
END RequestId: 0f2f9ecf-9df7-11e5-ae87-858fbfb8e85f
REPORT RequestId: 0f2f9ecf-9df7-11e5-ae87-858fbfb8e85f Duration: 0.40 ms Billed Duration: 100 ms Memory Size: 256 MB Max Memory Used: 23 MB
Response:
{"status": "success"}
done
$
```
We have just called our Lambda function, passing in the contents of the file
``_tests/test_one.json`` as input to our function. We can see the output of
the CloudWatch logs for the call, and we can see the logging call in the Python
function that prints out the ``event`` (the data) passed to the function. And
finally, we can see the Response from the function which, for now, is just a
hard-coded data structure returned by the function.
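Because the handler is plain Python, you can also exercise it locally without
AWS at all, which is handy while iterating (a hypothetical harness; assumes
you run it from the ``samples/simple`` directory with ``_src`` on your Python
path):

```python
# Call the handler directly with the same test event that
# `kappa invoke` sends, skipping AWS entirely.
import json

import simple  # the Lambda module from _src/

with open('_tests/test_one.json') as fp:
    event = json.load(fp)

response = simple.handler(event, None)  # context is unused in this sketch
print(json.dumps(response))             # {"status": "success"}
```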
That gives you a quick overview of kappa. To learn more about it, I recommend
you check out the tutorial.