Merge pull request #35 from garnaat/python-refactor
A WIP commit on the new refactor for support of Python and other features
Showing 75 changed files with 2983 additions and 635 deletions
... | @@ -3,6 +3,7 @@ python: | ... | @@ -3,6 +3,7 @@ python: |
3 | - "2.7" | 3 | - "2.7" |
4 | - "3.3" | 4 | - "3.3" |
5 | - "3.4" | 5 | - "3.4" |
6 | + - "3.5" | ||
6 | install: | 7 | install: |
7 | - pip install -r requirements.txt | 8 | - pip install -r requirements.txt |
8 | - pip install coverage python-coveralls | 9 | - pip install coverage python-coveralls | ... | ... |
... | @@ -2,4 +2,4 @@ include README.md | ... | @@ -2,4 +2,4 @@ include README.md |
2 | include LICENSE | 2 | include LICENSE |
3 | include requirements.txt | 3 | include requirements.txt |
4 | include kappa/_version | 4 | include kappa/_version |
5 | -recursive-include samples *.js *.yml *.cf *.json | 5 | +recursive-include samples *.js *.py *.yml *.cf *.json *.txt | ... | ... |
... | @@ -22,12 +22,12 @@ in a Push model (e.g. S3, SNS) rather than a Pull model. | ... | @@ -22,12 +22,12 @@ in a Push model (e.g. S3, SNS) rather than a Pull model. |
22 | * Add an event source to the function | 22 | * Add an event source to the function |
23 | * View the output of the live function | 23 | * View the output of the live function |
24 | 24 | ||
25 | -Kappa tries to help you with some of this. It allows you to create an IAM | 25 | +Kappa tries to help you with some of this. It creates all IAM policies for you |
26 | -managed policy or use an existing one. It creates the IAM execution role for | 26 | +based on the resources you have told it you need to access. It creates the IAM |
27 | -you and associates the policy with it. Kappa will zip up the function and | 27 | +execution role for you and associates the policy with it. Kappa will zip up |
28 | -any dependencies and upload them to AWS Lambda. It also sends test data | 28 | +the function and any dependencies and upload them to AWS Lambda. It also sends |
29 | -to the uploaded function and finds the related CloudWatch log stream and | 29 | +test data to the uploaded function and finds the related CloudWatch log stream |
30 | -displays the log events. Finally, it will add the event source to turn | 30 | +and displays the log events. Finally, it will add the event source to turn |
31 | your function on. | 31 | your function on. |
32 | 32 | ||
33 | If you need to make changes, kappa will allow you to easily update your Lambda | 33 | If you need to make changes, kappa will allow you to easily update your Lambda |
... | @@ -45,52 +45,195 @@ Or for the development version: | ... | @@ -45,52 +45,195 @@ Or for the development version: |
45 | pip install git+https://github.com/garnaat/kappa.git | 45 | pip install git+https://github.com/garnaat/kappa.git |
46 | 46 | ||
47 | 47 | ||
48 | -Getting Started | 48 | +Quick Start |
49 | ---------------- | 49 | +----------- |
50 | - | 50 | + |
51 | -Kappa is a command line tool. The basic command format is: | 51 | +To get a feel for how kappa works, let's take a look at a very simple example |
52 | - | 52 | +contained in the ``samples/simple`` directory of the kappa distribution. This |
53 | - kappa <path to config file> <command> [optional command args] | 53 | +example is so simple, in fact, that it doesn't really do anything. It's just a |
54 | - | 54 | +small Lambda function (written in Python) that accepts some JSON input, logs |
55 | -Where ``command`` is one of: | 55 | +that input to CloudWatch logs, and returns a JSON document back. |
56 | - | 56 | + |
57 | -* create - creates the IAM policy (if necessary), the IAM role, and zips and | 57 | +The structure of the directory is: |
58 | - uploads the Lambda function code to the Lambda service | 58 | + |
59 | -* invoke - make a synchronous call to your Lambda function, passing test data | 59 | +``` |
60 | - and display the resulting log data | 60 | +simple/ |
61 | -* invoke_async - make an asynchronous call to your Lambda function passing test | 61 | +├── _src |
62 | - data. | 62 | +│ ├── README.md |
63 | -* dryrun - make the call but only check things like permissions and report | 63 | +│ ├── requirements.txt |
64 | - back. Don't actually run the code. | 64 | +│ ├── setup.cfg |
65 | -* tail - display the most recent log events for the function (remember that it | 65 | +│ └── simple.py |
66 | - can take several minutes before log events are available from CloudWatch) | 66 | +├── _tests |
67 | -* add_event_sources - hook up an event source to your Lambda function | 67 | +│ └── test_one.json |
68 | -* delete - delete the Lambda function, remove any event sources, delete the IAM | 68 | +└── kappa.yml.sample |
69 | - policy and role | 69 | +``` |
70 | -* update_code - Upload new code for your Lambda function | 70 | + |
71 | -* update_event_sources - Update the event sources based on the information in | 71 | +Within the directory we see: |
72 | - your kappa config file | 72 | + |
73 | -* status - display summary information about functions, stacks, and event | 73 | +* `kappa.yml.sample` which is a sample YAML configuration file for the project |
74 | - sources related to your project. | 74 | +* `_src` which is a directory containing the source code for the Lambda function |
75 | - | 75 | +* `_tests` which is a directory containing some test data |
76 | -The ``config file`` is a YAML format file containing all of the information | 76 | + |
77 | -about your Lambda function. | 77 | +The first step is to make a copy of the sample configuration file: |
78 | - | 78 | + |
79 | -If you use environment variables for your AWS credentials (as normally supported by boto), | 79 | + $ cd simple |
80 | -simply exclude the ``profile`` element from the YAML file. | 80 | + $ cp kappa.yml.sample kappa.yml |
81 | - | 81 | + |
82 | -An example project based on a Kinesis stream can be found in | 82 | +Now you will need to edit ``kappa.yml`` slightly for your use. The file looks |
83 | -[samples/kinesis](https://github.com/garnaat/kappa/tree/develop/samples/kinesis). | 83 | +like this: |
84 | - | 84 | + |
85 | -The basic workflow is: | 85 | +``` |
86 | - | 86 | +--- |
87 | -* Create your Lambda function | 87 | +name: kappa-simple |
88 | -* Create any custom IAM policy you need to execute your Lambda function | 88 | +environments: |
89 | -* Create some sample data | 89 | + dev: |
90 | -* Create the YAML config file with all of the information | 90 | + profile: <your profile here> |
91 | -* Run ``kappa <path-to-config> create`` to create roles and upload function | 91 | + region: <your region here> |
92 | -* Run ``kappa <path-to-config> invoke`` to invoke the function with test data | 92 | + policy: |
93 | -* Run ``kappa <path-to-config> update_code`` to upload new code for your Lambda | 93 | + resources: |
94 | - function | 94 | + - arn: arn:aws:logs:*:*:* |
95 | -* Run ``kappa <path-to-config> add_event_sources`` to hook your function up to the event source | 95 | + actions: |
96 | -* Run ``kappa <path-to-config> tail`` to see more output | 96 | + - "*" |
97 | + prod: | ||
98 | + profile: <your profile here> | ||
99 | + region: <your region here> | ||
100 | + policy: | ||
101 | + resources: | ||
102 | + - arn: arn:aws:logs:*:*:* | ||
103 | + actions: | ||
104 | + - "*" | ||
105 | +lambda: | ||
106 | + description: A very simple Kappa example | ||
107 | + handler: simple.handler | ||
108 | + runtime: python2.7 | ||
109 | + memory_size: 128 | ||
110 | + timeout: 3 | ||
111 | +``` | ||
112 | + | ||
113 | +The ``name`` at the top is used both for the Lambda function itself and for | ||
114 | +the related resources kappa creates on its behalf (e.g. roles and | ||
115 | +policies). | ||
116 | + | ||
117 | +The ``environments`` section is where we define the different environments into | ||
118 | +which we wish to deploy this Lambda function. Each environment is identified | ||
119 | +by a ``profile`` (as used in the AWS CLI and other AWS tools) and a | ||
120 | +``region``. You can define as many environments as you wish but each | ||
121 | +invocation of ``kappa`` will deal with a single environment. Each environment | ||
122 | +section also includes a ``policy`` section. This is where we tell kappa about | ||
123 | +AWS resources that our Lambda function needs access to and what kind of access | ||
124 | +it requires. For example, your Lambda function may need to read from an SNS | ||
125 | +topic or write to a DynamoDB table, and this is where you would provide the ARNs | ||
126 | +([Amazon Resource Names](http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html)) | ||
127 | +that identify those resources. Since this is a very simple example, the only | ||
128 | +resource listed here is for CloudWatch logs so that our Lambda function is able | ||
129 | +to write to the CloudWatch log group that will be created for it automatically | ||
130 | +by AWS Lambda. | ||
131 | + | ||
132 | +The ``lambda`` section contains the configuration information about our Lambda | ||
133 | +function. These values are passed to Lambda when we create the function and | ||
134 | +can be updated at any time afterwards. | ||
135 | + | ||
136 | +To modify this for your own use, you just need to put in the right values for | ||
137 | +``profile`` and ``region`` in one of the environment sections. You can also | ||
138 | +change the names of the environments to be whatever you like, but ``dev`` is | ||
139 | +the default environment name used by kappa, so keeping it saves | ||
140 | +some typing. | ||
141 | + | ||
142 | +Once you have made the necessary modifications, you should be ready to deploy | ||
143 | +your Lambda function to the AWS Lambda service. To do so, just do this: | ||
144 | + | ||
145 | +``` | ||
146 | +$ kappa deploy | ||
147 | +``` | ||
148 | + | ||
149 | +This assumes you want to deploy the default environment called ``dev`` and that | ||
150 | +you have named your config file ``kappa.yml``. If, instead, you called your | ||
151 | +environment ``test`` and named your config file ``foo.yml``, you would do this: | ||
152 | + | ||
153 | +``` | ||
154 | +$ kappa --env test --config foo.yml deploy | ||
155 | +``` | ||
156 | + | ||
157 | +In either case, you should see output that looks something like this: | ||
158 | + | ||
159 | +``` | ||
160 | +$ kappa deploy | ||
161 | +deploying | ||
162 | +...deploying policy kappa-simple-dev | ||
163 | +...creating function kappa-simple-dev | ||
164 | +done | ||
165 | +$ | ||
166 | +``` | ||
167 | + | ||
168 | +Kappa has created a new managed policy called ``kappa-simple-dev`` that | ||
169 | +grants access to the CloudWatch Logs service, and an IAM role called | ||
170 | +``kappa-simple-dev`` that uses that policy. Finally, it has zipped up our | ||
171 | +Python code and created a function in AWS Lambda called | ||
172 | +``kappa-simple-dev``. | ||
173 | + | ||
174 | +To test this out, try this: | ||
175 | + | ||
176 | +``` | ||
177 | +$ kappa invoke _tests/test_one.json | ||
178 | +invoking | ||
179 | +START RequestId: 0f2f9ecf-9df7-11e5-ae87-858fbfb8e85f Version: $LATEST | ||
180 | +[DEBUG] 2015-12-08T22:00:15.363Z 0f2f9ecf-9df7-11e5-ae87-858fbfb8e85f {u'foo': u'bar', u'fie': u'baz'} | ||
181 | +END RequestId: 0f2f9ecf-9df7-11e5-ae87-858fbfb8e85f | ||
182 | +REPORT RequestId: 0f2f9ecf-9df7-11e5-ae87-858fbfb8e85f Duration: 0.40 ms Billed Duration: 100 ms Memory Size: 256 MB Max Memory Used: 23 MB | ||
183 | + | ||
184 | +Response: | ||
185 | +{"status": "success"} | ||
186 | +done | ||
187 | +$ | ||
188 | +``` | ||
189 | + | ||
190 | +We have just called our Lambda function, passing in the contents of the file | ||
191 | +``_tests/test_one.json`` as input to our function. We can see the output of | ||
192 | +the CloudWatch logs for the call and we can see the logging call in the Python | ||
193 | +function that prints out the ``event`` (the data) passed to the function. And | ||
194 | +finally, we can see the Response from the function which, for now, is just a | ||
195 | +hard-coded data structure returned by the function. | ||
196 | + | ||
197 | +Need to make a change in your function, your list of resources, or your | ||
198 | +function configuration? Just go ahead and make the change and then re-run the | ||
199 | +``deploy`` command: | ||
200 | + | ||
201 | + $ kappa deploy | ||
202 | + | ||
203 | +Kappa will figure out what has changed and make the necessary updates for you. | ||
204 | + | ||
205 | +That gives you a quick overview of kappa. To learn more about it, I recommend | ||
206 | +you check out the tutorial. | ||
207 | + | ||
208 | +Policies | ||
209 | +-------- | ||
210 | + | ||
211 | +Hands up who loves writing IAM policies. Yeah, that's what I thought. With | ||
212 | +Kappa, there is a simplified way of writing policies and granting your Lambda | ||
213 | +function the permissions it needs. | ||
214 | + | ||
215 | +The simplified version allows you to specify, in your `kappa.yml` file, the | ||
216 | +ARN of the resource you want to access, and then a list of the API methods you | ||
217 | +want to allow. For example: | ||
218 | + | ||
219 | +``` | ||
220 | +policy: | ||
221 | + resources: | ||
222 | + - arn: arn:aws:logs:*:*:* | ||
223 | + actions: | ||
224 | + - "*" | ||
225 | +``` | ||
226 | + | ||
227 | +To express this using the official IAM policy format, you can instead use a | ||
228 | +statement: | ||
229 | + | ||
230 | +``` | ||
231 | +policy: | ||
232 | + statements: | ||
233 | + - Effect: Allow | ||
234 | + Resource: "*" | ||
235 | + Action: | ||
236 | + - "logs:*" | ||
237 | +``` | ||
238 | + | ||
239 | +Both of these do the same thing. | ... | ... |
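The simplified ``resources`` form maps mechanically onto IAM policy statements. As an illustration only (this is a sketch of the mapping, not kappa's actual implementation, and note that the hand-written statement above achieves the same effect with a different Resource/Action split), the expansion could look like:

```python
def expand_policy(resources):
    """Expand a simplified ``resources`` list into a standard IAM policy
    document.  Illustrative only; kappa's real expansion may differ."""
    statements = [
        {'Effect': 'Allow',
         'Resource': resource['arn'],
         'Action': resource['actions']}
        for resource in resources
    ]
    return {'Version': '2012-10-17', 'Statement': statements}


# The README's example resource list...
simple = [{'arn': 'arn:aws:logs:*:*:*', 'actions': ['*']}]
# ...becomes a single Allow statement against the CloudWatch Logs ARN.
policy = expand_policy(simple)
```

The hypothetical ``expand_policy`` helper shows only the shape of the transformation; the generated document is what would eventually be attached to the function's execution role.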
bin/kappa
deleted
100755 → 0
1 | -#!/usr/bin/env python | ||
2 | -# Copyright (c) 2014 Mitch Garnaat http://garnaat.org/ | ||
3 | -# | ||
4 | -# Licensed under the Apache License, Version 2.0 (the "License"). You | ||
5 | -# may not use this file except in compliance with the License. A copy of | ||
6 | -# the License is located at | ||
7 | -# | ||
8 | -# http://aws.amazon.com/apache2.0/ | ||
9 | -# | ||
10 | -# or in the "license" file accompanying this file. This file is | ||
11 | -# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF | ||
12 | -# ANY KIND, either express or implied. See the License for the specific | ||
13 | -# language governing permissions and limitations under the License. | ||
14 | -from datetime import datetime | ||
15 | -import logging | ||
16 | -import base64 | ||
17 | - | ||
18 | -import click | ||
19 | - | ||
20 | -from kappa.context import Context | ||
21 | - | ||
22 | - | ||
23 | -@click.group() | ||
24 | -@click.argument( | ||
25 | - 'config', | ||
26 | - type=click.File('rb'), | ||
27 | - envvar='KAPPA_CONFIG', | ||
28 | -) | ||
29 | -@click.option( | ||
30 | - '--debug/--no-debug', | ||
31 | - default=False, | ||
32 | - help='Turn on debugging output' | ||
33 | -) | ||
34 | -@click.pass_context | ||
35 | -def cli(ctx, config=None, debug=False): | ||
36 | - config = config | ||
37 | - ctx.obj['debug'] = debug | ||
38 | - ctx.obj['config'] = config | ||
39 | - | ||
40 | -@cli.command() | ||
41 | -@click.pass_context | ||
42 | -def create(ctx): | ||
43 | - context = Context(ctx.obj['config'], ctx.obj['debug']) | ||
44 | - click.echo('creating...') | ||
45 | - context.create() | ||
46 | - click.echo('...done') | ||
47 | - | ||
48 | -@cli.command() | ||
49 | -@click.pass_context | ||
50 | -def update_code(ctx): | ||
51 | - context = Context(ctx.obj['config'], ctx.obj['debug']) | ||
52 | - click.echo('updating code...') | ||
53 | - context.update_code() | ||
54 | - click.echo('...done') | ||
55 | - | ||
56 | -@cli.command() | ||
57 | -@click.pass_context | ||
58 | -def invoke(ctx): | ||
59 | - context = Context(ctx.obj['config'], ctx.obj['debug']) | ||
60 | - click.echo('invoking...') | ||
61 | - response = context.invoke() | ||
62 | - log_data = base64.b64decode(response['LogResult']) | ||
63 | - click.echo(log_data) | ||
64 | - click.echo('...done') | ||
65 | - | ||
66 | -@cli.command() | ||
67 | -@click.pass_context | ||
68 | -def dryrun(ctx): | ||
69 | - context = Context(ctx.obj['config'], ctx.obj['debug']) | ||
70 | - click.echo('invoking dryrun...') | ||
71 | - response = context.dryrun() | ||
72 | - click.echo(response) | ||
73 | - click.echo('...done') | ||
74 | - | ||
75 | -@cli.command() | ||
76 | -@click.pass_context | ||
77 | -def invoke_async(ctx): | ||
78 | - context = Context(ctx.obj['config'], ctx.obj['debug']) | ||
79 | - click.echo('invoking async...') | ||
80 | - response = context.invoke_async() | ||
81 | - click.echo(response) | ||
82 | - click.echo('...done') | ||
83 | - | ||
84 | -@cli.command() | ||
85 | -@click.pass_context | ||
86 | -def tail(ctx): | ||
87 | - context = Context(ctx.obj['config'], ctx.obj['debug']) | ||
88 | - click.echo('tailing logs...') | ||
89 | - for e in context.tail()[-10:]: | ||
90 | - ts = datetime.utcfromtimestamp(e['timestamp']//1000).isoformat() | ||
91 | - click.echo("{}: {}".format(ts, e['message'])) | ||
92 | - click.echo('...done') | ||
93 | - | ||
94 | -@cli.command() | ||
95 | -@click.pass_context | ||
96 | -def status(ctx): | ||
97 | - context = Context(ctx.obj['config'], ctx.obj['debug']) | ||
98 | - status = context.status() | ||
99 | - click.echo(click.style('Policy', bold=True)) | ||
100 | - if status['policy']: | ||
101 | - line = ' {} ({})'.format( | ||
102 | - status['policy']['PolicyName'], | ||
103 | - status['policy']['Arn']) | ||
104 | - click.echo(click.style(line, fg='green')) | ||
105 | - click.echo(click.style('Role', bold=True)) | ||
106 | - if status['role']: | ||
107 | - line = ' {} ({})'.format( | ||
108 | - status['role']['Role']['RoleName'], | ||
109 | - status['role']['Role']['Arn']) | ||
110 | - click.echo(click.style(line, fg='green')) | ||
111 | - click.echo(click.style('Function', bold=True)) | ||
112 | - if status['function']: | ||
113 | - line = ' {} ({})'.format( | ||
114 | - status['function']['Configuration']['FunctionName'], | ||
115 | - status['function']['Configuration']['FunctionArn']) | ||
116 | - click.echo(click.style(line, fg='green')) | ||
117 | - else: | ||
118 | - click.echo(click.style(' None', fg='green')) | ||
119 | - click.echo(click.style('Event Sources', bold=True)) | ||
120 | - if status['event_sources']: | ||
121 | - for event_source in status['event_sources']: | ||
122 | - if event_source: | ||
123 | - line = ' {}: {}'.format( | ||
124 | - event_source['EventSourceArn'], event_source['State']) | ||
125 | - click.echo(click.style(line, fg='green')) | ||
126 | - else: | ||
127 | - click.echo(click.style(' None', fg='green')) | ||
128 | - | ||
129 | -@cli.command() | ||
130 | -@click.pass_context | ||
131 | -def delete(ctx): | ||
132 | - context = Context(ctx.obj['config'], ctx.obj['debug']) | ||
133 | - click.echo('deleting...') | ||
134 | - context.delete() | ||
135 | - click.echo('...done') | ||
136 | - | ||
137 | -@cli.command() | ||
138 | -@click.pass_context | ||
139 | -def add_event_sources(ctx): | ||
140 | - context = Context(ctx.obj['config'], ctx.obj['debug']) | ||
141 | - click.echo('adding event sources...') | ||
142 | - context.add_event_sources() | ||
143 | - click.echo('...done') | ||
144 | - | ||
145 | -@cli.command() | ||
146 | -@click.pass_context | ||
147 | -def update_event_sources(ctx): | ||
148 | - context = Context(ctx.obj['config'], ctx.obj['debug']) | ||
149 | - click.echo('updating event sources...') | ||
150 | - context.update_event_sources() | ||
151 | - click.echo('...done') | ||
152 | - | ||
153 | - | ||
154 | -if __name__ == '__main__': | ||
155 | - cli(obj={}) |
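The deleted ``bin/kappa`` above took the config file as a positional argument and built a ``Context`` per command; the refactor's ``docs/commands.rst`` instead describes ``--config`` and ``--env`` options with a ``deploy`` command. A minimal sketch of what such a click entry point could look like (the option wiring and command body here are assumptions, not the PR's actual code):

```python
import click


@click.group()
@click.option('--config', default='kappa.yml', type=click.Path(),
              help='Path to the kappa config file')
@click.option('--env', default='dev', help='Environment to operate on')
@click.option('--debug/--no-debug', default=False,
              help='Turn on debugging output')
@click.pass_context
def cli(ctx, config, env, debug):
    # Stash the global options; each subcommand would build its Context
    # (config parsing, AWS session, etc.) from these values.
    ctx.obj = {'config': config, 'env': env, 'debug': debug}


@cli.command()
@click.pass_context
def deploy(ctx):
    click.echo('deploying')
    # ...create/update the policy, role, and function here...
    click.echo('done')
```

Invoked as ``kappa --env test --config foo.yml deploy``, the group callback runs first and the subcommand reads the shared options from ``ctx.obj``.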
docs/Makefile
0 → 100644
1 | +# Makefile for Sphinx documentation | ||
2 | +# | ||
3 | + | ||
4 | +# You can set these variables from the command line. | ||
5 | +SPHINXOPTS = | ||
6 | +SPHINXBUILD = sphinx-build | ||
7 | +PAPER = | ||
8 | +BUILDDIR = _build | ||
9 | + | ||
10 | +# User-friendly check for sphinx-build | ||
11 | +ifeq ($(shell which $(SPHINXBUILD) >/dev/null 2>&1; echo $$?), 1) | ||
12 | +$(error The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed, then set the SPHINXBUILD environment variable to point to the full path of the '$(SPHINXBUILD)' executable. Alternatively you can add the directory with the executable to your PATH. If you don't have Sphinx installed, grab it from http://sphinx-doc.org/) | ||
13 | +endif | ||
14 | + | ||
15 | +# Internal variables. | ||
16 | +PAPEROPT_a4 = -D latex_paper_size=a4 | ||
17 | +PAPEROPT_letter = -D latex_paper_size=letter | ||
18 | +ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . | ||
19 | +# the i18n builder cannot share the environment and doctrees with the others | ||
20 | +I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . | ||
21 | + | ||
22 | +.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest coverage gettext | ||
23 | + | ||
24 | +help: | ||
25 | + @echo "Please use \`make <target>' where <target> is one of" | ||
26 | + @echo " html to make standalone HTML files" | ||
27 | + @echo " dirhtml to make HTML files named index.html in directories" | ||
28 | + @echo " singlehtml to make a single large HTML file" | ||
29 | + @echo " pickle to make pickle files" | ||
30 | + @echo " json to make JSON files" | ||
31 | + @echo " htmlhelp to make HTML files and a HTML help project" | ||
32 | + @echo " qthelp to make HTML files and a qthelp project" | ||
33 | + @echo " applehelp to make an Apple Help Book" | ||
34 | + @echo " devhelp to make HTML files and a Devhelp project" | ||
35 | + @echo " epub to make an epub" | ||
36 | + @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" | ||
37 | + @echo " latexpdf to make LaTeX files and run them through pdflatex" | ||
38 | + @echo " latexpdfja to make LaTeX files and run them through platex/dvipdfmx" | ||
39 | + @echo " text to make text files" | ||
40 | + @echo " man to make manual pages" | ||
41 | + @echo " texinfo to make Texinfo files" | ||
42 | + @echo " info to make Texinfo files and run them through makeinfo" | ||
43 | + @echo " gettext to make PO message catalogs" | ||
44 | + @echo " changes to make an overview of all changed/added/deprecated items" | ||
45 | + @echo " xml to make Docutils-native XML files" | ||
46 | + @echo " pseudoxml to make pseudoxml-XML files for display purposes" | ||
47 | + @echo " linkcheck to check all external links for integrity" | ||
48 | + @echo " doctest to run all doctests embedded in the documentation (if enabled)" | ||
49 | + @echo " coverage to run coverage check of the documentation (if enabled)" | ||
50 | + | ||
51 | +clean: | ||
52 | + rm -rf $(BUILDDIR)/* | ||
53 | + | ||
54 | +html: | ||
55 | + $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html | ||
56 | + @echo | ||
57 | + @echo "Build finished. The HTML pages are in $(BUILDDIR)/html." | ||
58 | + | ||
59 | +dirhtml: | ||
60 | + $(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml | ||
61 | + @echo | ||
62 | + @echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml." | ||
63 | + | ||
64 | +singlehtml: | ||
65 | + $(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml | ||
66 | + @echo | ||
67 | + @echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml." | ||
68 | + | ||
69 | +pickle: | ||
70 | + $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle | ||
71 | + @echo | ||
72 | + @echo "Build finished; now you can process the pickle files." | ||
73 | + | ||
74 | +json: | ||
75 | + $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json | ||
76 | + @echo | ||
77 | + @echo "Build finished; now you can process the JSON files." | ||
78 | + | ||
79 | +htmlhelp: | ||
80 | + $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp | ||
81 | + @echo | ||
82 | + @echo "Build finished; now you can run HTML Help Workshop with the" \ | ||
83 | + ".hhp project file in $(BUILDDIR)/htmlhelp." | ||
84 | + | ||
85 | +qthelp: | ||
86 | + $(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp | ||
87 | + @echo | ||
88 | + @echo "Build finished; now you can run "qcollectiongenerator" with the" \ | ||
89 | + ".qhcp project file in $(BUILDDIR)/qthelp, like this:" | ||
90 | + @echo "# qcollectiongenerator $(BUILDDIR)/qthelp/kappa.qhcp" | ||
91 | + @echo "To view the help file:" | ||
92 | + @echo "# assistant -collectionFile $(BUILDDIR)/qthelp/kappa.qhc" | ||
93 | + | ||
94 | +applehelp: | ||
95 | + $(SPHINXBUILD) -b applehelp $(ALLSPHINXOPTS) $(BUILDDIR)/applehelp | ||
96 | + @echo | ||
97 | + @echo "Build finished. The help book is in $(BUILDDIR)/applehelp." | ||
98 | + @echo "N.B. You won't be able to view it unless you put it in" \ | ||
99 | + "~/Library/Documentation/Help or install it in your application" \ | ||
100 | + "bundle." | ||
101 | + | ||
102 | +devhelp: | ||
103 | + $(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp | ||
104 | + @echo | ||
105 | + @echo "Build finished." | ||
106 | + @echo "To view the help file:" | ||
107 | + @echo "# mkdir -p $$HOME/.local/share/devhelp/kappa" | ||
108 | + @echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/kappa" | ||
109 | + @echo "# devhelp" | ||
110 | + | ||
111 | +epub: | ||
112 | + $(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub | ||
113 | + @echo | ||
114 | + @echo "Build finished. The epub file is in $(BUILDDIR)/epub." | ||
115 | + | ||
116 | +latex: | ||
117 | + $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex | ||
118 | + @echo | ||
119 | + @echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex." | ||
120 | + @echo "Run \`make' in that directory to run these through (pdf)latex" \ | ||
121 | + "(use \`make latexpdf' here to do that automatically)." | ||
122 | + | ||
123 | +latexpdf: | ||
124 | + $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex | ||
125 | + @echo "Running LaTeX files through pdflatex..." | ||
126 | + $(MAKE) -C $(BUILDDIR)/latex all-pdf | ||
127 | + @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." | ||
128 | + | ||
129 | +latexpdfja: | ||
130 | + $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex | ||
131 | + @echo "Running LaTeX files through platex and dvipdfmx..." | ||
132 | + $(MAKE) -C $(BUILDDIR)/latex all-pdf-ja | ||
133 | + @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." | ||
134 | + | ||
135 | +text: | ||
136 | + $(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text | ||
137 | + @echo | ||
138 | + @echo "Build finished. The text files are in $(BUILDDIR)/text." | ||
139 | + | ||
140 | +man: | ||
141 | + $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man | ||
142 | + @echo | ||
143 | + @echo "Build finished. The manual pages are in $(BUILDDIR)/man." | ||
144 | + | ||
145 | +texinfo: | ||
146 | + $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo | ||
147 | + @echo | ||
148 | + @echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo." | ||
149 | + @echo "Run \`make' in that directory to run these through makeinfo" \ | ||
150 | + "(use \`make info' here to do that automatically)." | ||
151 | + | ||
152 | +info: | ||
153 | + $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo | ||
154 | + @echo "Running Texinfo files through makeinfo..." | ||
155 | + make -C $(BUILDDIR)/texinfo info | ||
156 | + @echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo." | ||
157 | + | ||
158 | +gettext: | ||
159 | + $(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale | ||
160 | + @echo | ||
161 | + @echo "Build finished. The message catalogs are in $(BUILDDIR)/locale." | ||
162 | + | ||
163 | +changes: | ||
164 | + $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes | ||
165 | + @echo | ||
166 | + @echo "The overview file is in $(BUILDDIR)/changes." | ||
167 | + | ||
168 | +linkcheck: | ||
169 | + $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck | ||
170 | + @echo | ||
171 | + @echo "Link check complete; look for any errors in the above output " \ | ||
172 | + "or in $(BUILDDIR)/linkcheck/output.txt." | ||
173 | + | ||
174 | +doctest: | ||
175 | + $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest | ||
176 | + @echo "Testing of doctests in the sources finished, look at the " \ | ||
177 | + "results in $(BUILDDIR)/doctest/output.txt." | ||
178 | + | ||
179 | +coverage: | ||
180 | + $(SPHINXBUILD) -b coverage $(ALLSPHINXOPTS) $(BUILDDIR)/coverage | ||
181 | + @echo "Testing of coverage in the sources finished, look at the " \ | ||
182 | + "results in $(BUILDDIR)/coverage/python.txt." | ||
183 | + | ||
184 | +xml: | ||
185 | + $(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml | ||
186 | + @echo | ||
187 | + @echo "Build finished. The XML files are in $(BUILDDIR)/xml." | ||
188 | + | ||
189 | +pseudoxml: | ||
190 | + $(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml | ||
191 | + @echo | ||
192 | + @echo "Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml." |
docs/commands.rst
0 → 100644
1 | +Commands | ||
2 | +======== | ||
3 | + | ||
4 | +Kappa is a command line tool. The basic command format is: | ||
5 | + | ||
6 | +``kappa [options] <command> [optional command args]`` | ||
7 | + | ||
8 | +Available ``options`` are: | ||
9 | + | ||
10 | +* --config <config_file> to specify where to find the kappa config file. The | ||
11 | + default is to look in ``kappa.yml``. | ||
12 | +* --env <environment> to specify which environment in your config file you are | ||
13 | + using. The default is ``dev``. | ||
14 | +* --debug/--no-debug to turn on/off the debug logging. | ||
15 | +* --help to access command line help. | ||
16 | + | ||
17 | +And ``command`` is one of: | ||
18 | + | ||
19 | +* deploy | ||
20 | +* delete | ||
21 | +* invoke | ||
22 | +* tag | ||
23 | +* tail | ||
24 | +* event_sources | ||
25 | +* status | ||
26 | + | ||
27 | +Details of each command are provided below. | ||
28 | + | ||
29 | +deploy | ||
30 | +------ | ||
31 | + | ||
32 | +The ``deploy`` command does whatever is required to deploy the | ||
33 | +current version of your Lambda function, such as creating/updating policies and | ||
34 | +roles, creating or updating the function itself, and adding any event sources | ||
35 | +specified in your config file. | ||
36 | + | ||
37 | +When the command is run the first time, it creates all of the relevant | ||
38 | +resources required. On subsequent invocations, it will attempt to determine | ||
39 | +what, if anything, has changed in the project and only update those resources. | ||
40 | + | ||
41 | +delete | ||
42 | +------ | ||
43 | + | ||
44 | +The ``delete`` command deletes the Lambda function, removes any event sources, | ||
45 | +and deletes the IAM policy and role. | ||
46 | + | ||
47 | +invoke | ||
48 | +------ | ||
49 | + | ||
50 | +The ``invoke`` command makes a synchronous call to your Lambda function, | ||
51 | +passing test data, and displays the resulting log data and any response | ||
52 | +returned from your Lambda function. | ||
53 | + | ||
54 | +The ``invoke`` command takes one positional argument, the ``data_file``. This | ||
55 | +should be the path to a JSON data file that will be sent to the function as | ||
56 | +data. | ||
57 | + | ||
58 | +tag | ||
59 | +--- | ||
60 | + | ||
61 | +The ``tag`` command tags the current version of the Lambda function with a | ||
62 | +symbolic tag. In Lambda terms, this creates an ``alias``. | ||
63 | + | ||
64 | +The ``tag`` command requires two additional positional arguments: | ||
65 | + | ||
66 | +* name - the name of the tag or alias | ||
67 | +* description - the description of the alias | ||
68 | + | ||
69 | +tail | ||
70 | +---- | ||
71 | + | ||
72 | +The ``tail`` command displays the most recent log events for the function | ||
73 | +(remember that it can take several minutes before log events are available from CloudWatch). | ||
74 | + | ||
75 | +test | ||
76 | +---- | ||
77 | + | ||
78 | +The ``test`` command provides a way to run unit tests of code in your Lambda | ||
79 | +function. By default, it uses the ``nose`` Python test runner, but this can be | ||
80 | +overridden by specifying an alternative value using the ``unit_test_runner`` | ||
81 | +attribute in the kappa config file. | ||
82 | + | ||
83 | +When using nose, it expects to find standard Python unit tests in the | ||
84 | +``_tests/unit`` directory of your project. It will then run those tests in an | ||
85 | +environment that also makes any Python modules in your ``_src`` directory | ||
86 | +available to the tests. | ||
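A minimal unit test of the kind that would live in ``_tests/unit`` might look like the following. For the sake of a self-contained example, the handler is defined inline; in a real project it would be imported from a module in ``_src`` (the names here are hypothetical):

```python
import unittest

# Stand-in for a handler that would normally live in _src/simple.py;
# defined inline so this example runs on its own.
def handler(event, context):
    return {"status": "ok", "records": len(event.get("Records", []))}

class TestHandler(unittest.TestCase):
    def test_returns_ok(self):
        result = handler({"Records": [{}, {}]}, None)
        self.assertEqual(result["status"], "ok")
        self.assertEqual(result["records"], 2)

# Run the suite programmatically; nose (or another configured runner)
# would discover and run this test class automatically.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestHandler)
result = unittest.TextTestRunner().run(suite)
```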
87 | + | ||
88 | +event_sources | ||
89 | +------------- | ||
90 | + | ||
91 | +The ``event_sources`` command provides access to the commands available for | ||
92 | +dealing with event sources. This command takes an additional positional | ||
93 | +argument, ``command``. | ||
94 | + | ||
95 | +* command - the command to run (list|enable|disable) | ||
96 | + | ||
97 | +status | ||
98 | +------ | ||
99 | + | ||
100 | +The ``status`` command displays summary information about functions, stacks, | ||
101 | +and event sources related to your project. |
docs/conf.py
0 → 100644
1 | +# -*- coding: utf-8 -*- | ||
2 | +# | ||
3 | +# kappa documentation build configuration file, created by | ||
4 | +# sphinx-quickstart on Tue Oct 13 12:59:27 2015. | ||
5 | +# | ||
6 | +# This file is execfile()d with the current directory set to its | ||
7 | +# containing dir. | ||
8 | +# | ||
9 | +# Note that not all possible configuration values are present in this | ||
10 | +# autogenerated file. | ||
11 | +# | ||
12 | +# All configuration values have a default; values that are commented out | ||
13 | +# serve to show the default. | ||
14 | + | ||
15 | +import sys | ||
16 | +import os | ||
17 | +import shlex | ||
18 | + | ||
19 | +# If extensions (or modules to document with autodoc) are in another directory, | ||
20 | +# add these directories to sys.path here. If the directory is relative to the | ||
21 | +# documentation root, use os.path.abspath to make it absolute, like shown here. | ||
22 | +#sys.path.insert(0, os.path.abspath('.')) | ||
23 | + | ||
24 | +# -- General configuration ------------------------------------------------ | ||
25 | + | ||
26 | +# If your documentation needs a minimal Sphinx version, state it here. | ||
27 | +#needs_sphinx = '1.0' | ||
28 | + | ||
29 | +# Add any Sphinx extension module names here, as strings. They can be | ||
30 | +# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom | ||
31 | +# ones. | ||
32 | +extensions = [ | ||
33 | + 'sphinx.ext.autodoc', | ||
34 | +] | ||
35 | + | ||
36 | +# Add any paths that contain templates here, relative to this directory. | ||
37 | +templates_path = ['_templates'] | ||
38 | + | ||
39 | +# The suffix(es) of source filenames. | ||
40 | +# You can specify multiple suffix as a list of string: | ||
41 | +# source_suffix = ['.rst', '.md'] | ||
42 | +source_suffix = '.rst' | ||
43 | + | ||
44 | +# The encoding of source files. | ||
45 | +#source_encoding = 'utf-8-sig' | ||
46 | + | ||
47 | +# The master toctree document. | ||
48 | +master_doc = 'index' | ||
49 | + | ||
50 | +# General information about the project. | ||
51 | +project = u'kappa' | ||
52 | +copyright = u'2015, Mitch Garnaat' | ||
53 | +author = u'Mitch Garnaat' | ||
54 | + | ||
55 | +# The version info for the project you're documenting, acts as replacement for | ||
56 | +# |version| and |release|, also used in various other places throughout the | ||
57 | +# built documents. | ||
58 | +# | ||
59 | +# The short X.Y version. | ||
60 | +version = '0.4.0' | ||
61 | +# The full version, including alpha/beta/rc tags. | ||
62 | +release = '0.4.0' | ||
63 | + | ||
64 | +# The language for content autogenerated by Sphinx. Refer to documentation | ||
65 | +# for a list of supported languages. | ||
66 | +# | ||
67 | +# This is also used if you do content translation via gettext catalogs. | ||
68 | +# Usually you set "language" from the command line for these cases. | ||
69 | +language = None | ||
70 | + | ||
71 | +# There are two options for replacing |today|: either, you set today to some | ||
72 | +# non-false value, then it is used: | ||
73 | +#today = '' | ||
74 | +# Else, today_fmt is used as the format for a strftime call. | ||
75 | +#today_fmt = '%B %d, %Y' | ||
76 | + | ||
77 | +# List of patterns, relative to source directory, that match files and | ||
78 | +# directories to ignore when looking for source files. | ||
79 | +exclude_patterns = ['_build'] | ||
80 | + | ||
81 | +# The reST default role (used for this markup: `text`) to use for all | ||
82 | +# documents. | ||
83 | +#default_role = None | ||
84 | + | ||
85 | +# If true, '()' will be appended to :func: etc. cross-reference text. | ||
86 | +#add_function_parentheses = True | ||
87 | + | ||
88 | +# If true, the current module name will be prepended to all description | ||
89 | +# unit titles (such as .. function::). | ||
90 | +#add_module_names = True | ||
91 | + | ||
92 | +# If true, sectionauthor and moduleauthor directives will be shown in the | ||
93 | +# output. They are ignored by default. | ||
94 | +#show_authors = False | ||
95 | + | ||
96 | +# The name of the Pygments (syntax highlighting) style to use. | ||
97 | +pygments_style = 'sphinx' | ||
98 | + | ||
99 | +# A list of ignored prefixes for module index sorting. | ||
100 | +#modindex_common_prefix = [] | ||
101 | + | ||
102 | +# If true, keep warnings as "system message" paragraphs in the built documents. | ||
103 | +#keep_warnings = False | ||
104 | + | ||
105 | +# If true, `todo` and `todoList` produce output, else they produce nothing. | ||
106 | +todo_include_todos = False | ||
107 | + | ||
108 | + | ||
109 | +# -- Options for HTML output ---------------------------------------------- | ||
110 | + | ||
111 | +# The theme to use for HTML and HTML Help pages. See the documentation for | ||
112 | +# a list of builtin themes. | ||
113 | +html_theme = 'alabaster' | ||
114 | + | ||
115 | +# Theme options are theme-specific and customize the look and feel of a theme | ||
116 | +# further. For a list of options available for each theme, see the | ||
117 | +# documentation. | ||
118 | +#html_theme_options = {} | ||
119 | + | ||
120 | +# Add any paths that contain custom themes here, relative to this directory. | ||
121 | +#html_theme_path = [] | ||
122 | + | ||
123 | +# The name for this set of Sphinx documents. If None, it defaults to | ||
124 | +# "<project> v<release> documentation". | ||
125 | +#html_title = None | ||
126 | + | ||
127 | +# A shorter title for the navigation bar. Default is the same as html_title. | ||
128 | +#html_short_title = None | ||
129 | + | ||
130 | +# The name of an image file (relative to this directory) to place at the top | ||
131 | +# of the sidebar. | ||
132 | +#html_logo = None | ||
133 | + | ||
134 | +# The name of an image file (within the static path) to use as favicon of the | ||
135 | +# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 | ||
136 | +# pixels large. | ||
137 | +#html_favicon = None | ||
138 | + | ||
139 | +# Add any paths that contain custom static files (such as style sheets) here, | ||
140 | +# relative to this directory. They are copied after the builtin static files, | ||
141 | +# so a file named "default.css" will overwrite the builtin "default.css". | ||
142 | +html_static_path = ['_static'] | ||
143 | + | ||
144 | +# Add any extra paths that contain custom files (such as robots.txt or | ||
145 | +# .htaccess) here, relative to this directory. These files are copied | ||
146 | +# directly to the root of the documentation. | ||
147 | +#html_extra_path = [] | ||
148 | + | ||
149 | +# If not '', a 'Last updated on:' timestamp is inserted at every page bottom, | ||
150 | +# using the given strftime format. | ||
151 | +#html_last_updated_fmt = '%b %d, %Y' | ||
152 | + | ||
153 | +# If true, SmartyPants will be used to convert quotes and dashes to | ||
154 | +# typographically correct entities. | ||
155 | +#html_use_smartypants = True | ||
156 | + | ||
157 | +# Custom sidebar templates, maps document names to template names. | ||
158 | +#html_sidebars = {} | ||
159 | + | ||
160 | +# Additional templates that should be rendered to pages, maps page names to | ||
161 | +# template names. | ||
162 | +#html_additional_pages = {} | ||
163 | + | ||
164 | +# If false, no module index is generated. | ||
165 | +#html_domain_indices = True | ||
166 | + | ||
167 | +# If false, no index is generated. | ||
168 | +#html_use_index = True | ||
169 | + | ||
170 | +# If true, the index is split into individual pages for each letter. | ||
171 | +#html_split_index = False | ||
172 | + | ||
173 | +# If true, links to the reST sources are added to the pages. | ||
174 | +#html_show_sourcelink = True | ||
175 | + | ||
176 | +# If true, "Created using Sphinx" is shown in the HTML footer. Default is True. | ||
177 | +#html_show_sphinx = True | ||
178 | + | ||
179 | +# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True. | ||
180 | +#html_show_copyright = True | ||
181 | + | ||
182 | +# If true, an OpenSearch description file will be output, and all pages will | ||
183 | +# contain a <link> tag referring to it. The value of this option must be the | ||
184 | +# base URL from which the finished HTML is served. | ||
185 | +#html_use_opensearch = '' | ||
186 | + | ||
187 | +# This is the file name suffix for HTML files (e.g. ".xhtml"). | ||
188 | +#html_file_suffix = None | ||
189 | + | ||
190 | +# Language to be used for generating the HTML full-text search index. | ||
191 | +# Sphinx supports the following languages: | ||
192 | +# 'da', 'de', 'en', 'es', 'fi', 'fr', 'hu', 'it', 'ja' | ||
193 | +# 'nl', 'no', 'pt', 'ro', 'ru', 'sv', 'tr' | ||
194 | +#html_search_language = 'en' | ||
195 | + | ||
196 | +# A dictionary with options for the search language support, empty by default. | ||
197 | +# Now only 'ja' uses this config value | ||
198 | +#html_search_options = {'type': 'default'} | ||
199 | + | ||
200 | +# The name of a javascript file (relative to the configuration directory) that | ||
201 | +# implements a search results scorer. If empty, the default will be used. | ||
202 | +#html_search_scorer = 'scorer.js' | ||
203 | + | ||
204 | +# Output file base name for HTML help builder. | ||
205 | +htmlhelp_basename = 'kappadoc' | ||
206 | + | ||
207 | +# -- Options for LaTeX output --------------------------------------------- | ||
208 | + | ||
209 | +latex_elements = { | ||
210 | +# The paper size ('letterpaper' or 'a4paper'). | ||
211 | +#'papersize': 'letterpaper', | ||
212 | + | ||
213 | +# The font size ('10pt', '11pt' or '12pt'). | ||
214 | +#'pointsize': '10pt', | ||
215 | + | ||
216 | +# Additional stuff for the LaTeX preamble. | ||
217 | +#'preamble': '', | ||
218 | + | ||
219 | +# Latex figure (float) alignment | ||
220 | +#'figure_align': 'htbp', | ||
221 | +} | ||
222 | + | ||
223 | +# Grouping the document tree into LaTeX files. List of tuples | ||
224 | +# (source start file, target name, title, | ||
225 | +# author, documentclass [howto, manual, or own class]). | ||
226 | +latex_documents = [ | ||
227 | + (master_doc, 'kappa.tex', u'kappa Documentation', | ||
228 | + u'Mitch Garnaat', 'manual'), | ||
229 | +] | ||
230 | + | ||
231 | +# The name of an image file (relative to this directory) to place at the top of | ||
232 | +# the title page. | ||
233 | +#latex_logo = None | ||
234 | + | ||
235 | +# For "manual" documents, if this is true, then toplevel headings are parts, | ||
236 | +# not chapters. | ||
237 | +#latex_use_parts = False | ||
238 | + | ||
239 | +# If true, show page references after internal links. | ||
240 | +#latex_show_pagerefs = False | ||
241 | + | ||
242 | +# If true, show URL addresses after external links. | ||
243 | +#latex_show_urls = False | ||
244 | + | ||
245 | +# Documents to append as an appendix to all manuals. | ||
246 | +#latex_appendices = [] | ||
247 | + | ||
248 | +# If false, no module index is generated. | ||
249 | +#latex_domain_indices = True | ||
250 | + | ||
251 | + | ||
252 | +# -- Options for manual page output --------------------------------------- | ||
253 | + | ||
254 | +# One entry per manual page. List of tuples | ||
255 | +# (source start file, name, description, authors, manual section). | ||
256 | +man_pages = [ | ||
257 | + (master_doc, 'kappa', u'kappa Documentation', | ||
258 | + [author], 1) | ||
259 | +] | ||
260 | + | ||
261 | +# If true, show URL addresses after external links. | ||
262 | +#man_show_urls = False | ||
263 | + | ||
264 | + | ||
265 | +# -- Options for Texinfo output ------------------------------------------- | ||
266 | + | ||
267 | +# Grouping the document tree into Texinfo files. List of tuples | ||
268 | +# (source start file, target name, title, author, | ||
269 | +# dir menu entry, description, category) | ||
270 | +texinfo_documents = [ | ||
271 | + (master_doc, 'kappa', u'kappa Documentation', | ||
272 | + author, 'kappa', 'One line description of project.', | ||
273 | + 'Miscellaneous'), | ||
274 | +] | ||
275 | + | ||
276 | +# Documents to append as an appendix to all manuals. | ||
277 | +#texinfo_appendices = [] | ||
278 | + | ||
279 | +# If false, no module index is generated. | ||
280 | +#texinfo_domain_indices = True | ||
281 | + | ||
282 | +# How to display URL addresses: 'footnote', 'no', or 'inline'. | ||
283 | +#texinfo_show_urls = 'footnote' | ||
284 | + | ||
285 | +# If true, do not generate a @detailmenu in the "Top" node's menu. | ||
286 | +#texinfo_no_detailmenu = False |
docs/config_file_example.rst
0 → 100644
1 | +The Config File | ||
2 | +=============== | ||
3 | + | ||
4 | +The config file is at the heart of kappa. It is what describes your functions | ||
5 | +and drives your deployments. This section provides a reference for all of the | ||
6 | +elements of the kappa config file. | ||
7 | + | ||
8 | + | ||
9 | +Example | ||
10 | +------- | ||
11 | + | ||
12 | +Here is an example config file showing all possible sections. | ||
13 | + | ||
14 | +.. sourcecode:: yaml | ||
15 | + :linenos: | ||
16 | + | ||
17 | + --- | ||
18 | + name: kappa-python-sample | ||
19 | + environments: | ||
20 | + env1: | ||
21 | + profile: profile1 | ||
22 | + region: us-west-2 | ||
23 | + policy: | ||
24 | + resources: | ||
25 | + - arn: arn:aws:dynamodb:us-west-2:123456789012:table/foo | ||
26 | + actions: | ||
27 | + - "*" | ||
28 | + - arn: arn:aws:logs:*:*:* | ||
29 | + actions: | ||
30 | + - "*" | ||
31 | + event_sources: | ||
32 | + - | ||
33 | + arn: arn:aws:kinesis:us-west-2:123456789012:stream/foo | ||
34 | + starting_position: LATEST | ||
35 | + batch_size: 100 | ||
36 | + env2: | ||
37 | + profile: profile2 | ||
38 | + region: us-west-2 | ||
39 | + policy_resources: | ||
40 | + - arn: arn:aws:dynamodb:us-west-2:234567890123:table/foo | ||
41 | + actions: | ||
42 | + - "*" | ||
43 | + - arn: arn:aws:logs:*:*:* | ||
44 | + actions: | ||
45 | + - "*" | ||
46 | + event_sources: | ||
47 | + - | ||
48 | + arn: arn:aws:kinesis:us-west-2:234567890123:stream/foo | ||
49 | + starting_position: LATEST | ||
50 | + batch_size: 100 | ||
51 | + lambda: | ||
52 | + description: A simple Python sample | ||
53 | + handler: simple.handler | ||
54 | + runtime: python2.7 | ||
55 | + memory_size: 256 | ||
56 | + timeout: 3 | ||
57 | + vpc_config: | ||
58 | + security_group_ids: | ||
59 | + - sg-12345678 | ||
60 | + - sg-23456789 | ||
61 | + subnet_ids: | ||
62 | + - subnet-12345678 | ||
63 | + - subnet-23456789 | ||
64 | + | ||
65 | + | ||
66 | +Explanations: | ||
67 | + | ||
68 | +=========== ============================================================= | ||
69 | +Line Number Description | ||
70 | +=========== ============================================================= | ||
71 | +2 This name will be used to name the function itself as well as | ||
72 | + any policies and roles created for use by the function. | ||
73 | +3 A map of environments. Each environment represents one | ||
74 | + possible deployment target. For example, you might have a | ||
75 | + dev and a prod. The names can be whatever you want but the | ||
76 | + environment names are specified using the --env option when | ||
77 | + you deploy. | ||
78 | +5 The profile name associated with this environment. This | ||
79 | + refers to a profile in your AWS credential file. | ||
80 | +6 The AWS region associated with this environment. | ||
81 | +7 This section defines the elements of the IAM policy that will | ||
82 | + be created for this function in this environment. | ||
83 | +9 Each resource your function needs access to needs to be | ||
84 | + listed here. Provide the ARN of the resource as well as | ||
85 | + a list of actions. This could be wildcarded to allow all | ||
86 | + actions but preferably should list the specific actions you | ||
87 | + want to allow. | ||
88 | +15 If your Lambda function has any event sources, this would be | ||
89 | + where you list them. Here, the example shows a Kinesis | ||
90 | + stream but this could also be a DynamoDB stream, an SNS | ||
91 | + topic, or an S3 bucket. | ||
92 | +18 For Kinesis streams and DynamoDB streams, you can specify | ||
93 | + the starting position (one of LATEST or TRIM_HORIZON) and | ||
94 | + the batch size. | ||
95 | +35 This section contains settings specific to your Lambda | ||
96 | + function. See the Lambda docs for details on these. | ||
97 | +=========== ============================================================= |
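The environment lookup driven by the ``--env`` option can be sketched as follows. The config is shown here as a plain dict mirroring the YAML above; kappa itself parses the YAML file, so this structure is an assumption for illustration only:

```python
# Plain-dict stand-in for the parsed YAML config above (illustrative only).
config = {
    "name": "kappa-python-sample",
    "environments": {
        "env1": {"profile": "profile1", "region": "us-west-2"},
        "env2": {"profile": "profile2", "region": "us-west-2"},
    },
}

def select_environment(config, env_name):
    """Pick the environment section named by the --env option."""
    try:
        return config["environments"][env_name]
    except KeyError:
        raise ValueError("unknown environment: %s" % env_name)

env = select_environment(config, "env1")
print(env["profile"], env["region"])
```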
docs/index.rst
0 → 100644
1 | +.. kappa documentation master file, created by | ||
2 | + sphinx-quickstart on Tue Oct 13 12:59:27 2015. | ||
3 | + You can adapt this file completely to your liking, but it should at least | ||
4 | + contain the root `toctree` directive. | ||
5 | + | ||
6 | +Welcome to kappa's documentation | ||
7 | +================================ | ||
8 | + | ||
9 | +Contents: | ||
10 | + | ||
11 | +.. toctree:: | ||
12 | + :maxdepth: 2 | ||
13 | + | ||
14 | + why | ||
15 | + how | ||
16 | + config_file_example | ||
17 | + commands | ||
18 | + | ||
19 | + | ||
20 | + | ||
21 | +Indices and tables | ||
22 | +================== | ||
23 | + | ||
24 | +* :ref:`genindex` | ||
25 | +* :ref:`modindex` | ||
26 | +* :ref:`search` | ||
27 | + |
docs/make.bat
0 → 100644
1 | +@ECHO OFF | ||
2 | + | ||
3 | +REM Command file for Sphinx documentation | ||
4 | + | ||
5 | +if "%SPHINXBUILD%" == "" ( | ||
6 | + set SPHINXBUILD=sphinx-build | ||
7 | +) | ||
8 | +set BUILDDIR=_build | ||
9 | +set ALLSPHINXOPTS=-d %BUILDDIR%/doctrees %SPHINXOPTS% . | ||
10 | +set I18NSPHINXOPTS=%SPHINXOPTS% . | ||
11 | +if NOT "%PAPER%" == "" ( | ||
12 | + set ALLSPHINXOPTS=-D latex_paper_size=%PAPER% %ALLSPHINXOPTS% | ||
13 | + set I18NSPHINXOPTS=-D latex_paper_size=%PAPER% %I18NSPHINXOPTS% | ||
14 | +) | ||
15 | + | ||
16 | +if "%1" == "" goto help | ||
17 | + | ||
18 | +if "%1" == "help" ( | ||
19 | + :help | ||
20 | + echo.Please use `make ^<target^>` where ^<target^> is one of | ||
21 | + echo. html to make standalone HTML files | ||
22 | + echo. dirhtml to make HTML files named index.html in directories | ||
23 | + echo. singlehtml to make a single large HTML file | ||
24 | + echo. pickle to make pickle files | ||
25 | + echo. json to make JSON files | ||
26 | + echo. htmlhelp to make HTML files and a HTML help project | ||
27 | + echo. qthelp to make HTML files and a qthelp project | ||
28 | + echo. devhelp to make HTML files and a Devhelp project | ||
29 | + echo. epub to make an epub | ||
30 | + echo. latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter | ||
31 | + echo. text to make text files | ||
32 | + echo. man to make manual pages | ||
33 | + echo. texinfo to make Texinfo files | ||
34 | + echo. gettext to make PO message catalogs | ||
35 | + echo. changes to make an overview over all changed/added/deprecated items | ||
36 | + echo. xml to make Docutils-native XML files | ||
37 | + echo. pseudoxml to make pseudoxml-XML files for display purposes | ||
38 | + echo. linkcheck to check all external links for integrity | ||
39 | + echo. doctest to run all doctests embedded in the documentation if enabled | ||
40 | + echo. coverage to run coverage check of the documentation if enabled | ||
41 | + goto end | ||
42 | +) | ||
43 | + | ||
44 | +if "%1" == "clean" ( | ||
45 | + for /d %%i in (%BUILDDIR%\*) do rmdir /q /s %%i | ||
46 | + del /q /s %BUILDDIR%\* | ||
47 | + goto end | ||
48 | +) | ||
49 | + | ||
50 | + | ||
51 | +REM Check if sphinx-build is available and fallback to Python version if any | ||
52 | +%SPHINXBUILD% 2> nul | ||
53 | +if errorlevel 9009 goto sphinx_python | ||
54 | +goto sphinx_ok | ||
55 | + | ||
56 | +:sphinx_python | ||
57 | + | ||
58 | +set SPHINXBUILD=python -m sphinx.__init__ | ||
59 | +%SPHINXBUILD% 2> nul | ||
60 | +if errorlevel 9009 ( | ||
61 | + echo. | ||
62 | + echo.The 'sphinx-build' command was not found. Make sure you have Sphinx | ||
63 | + echo.installed, then set the SPHINXBUILD environment variable to point | ||
64 | + echo.to the full path of the 'sphinx-build' executable. Alternatively you | ||
65 | + echo.may add the Sphinx directory to PATH. | ||
66 | + echo. | ||
67 | + echo.If you don't have Sphinx installed, grab it from | ||
68 | + echo.http://sphinx-doc.org/ | ||
69 | + exit /b 1 | ||
70 | +) | ||
71 | + | ||
72 | +:sphinx_ok | ||
73 | + | ||
74 | + | ||
75 | +if "%1" == "html" ( | ||
76 | + %SPHINXBUILD% -b html %ALLSPHINXOPTS% %BUILDDIR%/html | ||
77 | + if errorlevel 1 exit /b 1 | ||
78 | + echo. | ||
79 | + echo.Build finished. The HTML pages are in %BUILDDIR%/html. | ||
80 | + goto end | ||
81 | +) | ||
82 | + | ||
83 | +if "%1" == "dirhtml" ( | ||
84 | + %SPHINXBUILD% -b dirhtml %ALLSPHINXOPTS% %BUILDDIR%/dirhtml | ||
85 | + if errorlevel 1 exit /b 1 | ||
86 | + echo. | ||
87 | + echo.Build finished. The HTML pages are in %BUILDDIR%/dirhtml. | ||
88 | + goto end | ||
89 | +) | ||
90 | + | ||
91 | +if "%1" == "singlehtml" ( | ||
92 | + %SPHINXBUILD% -b singlehtml %ALLSPHINXOPTS% %BUILDDIR%/singlehtml | ||
93 | + if errorlevel 1 exit /b 1 | ||
94 | + echo. | ||
95 | + echo.Build finished. The HTML pages are in %BUILDDIR%/singlehtml. | ||
96 | + goto end | ||
97 | +) | ||
98 | + | ||
99 | +if "%1" == "pickle" ( | ||
100 | + %SPHINXBUILD% -b pickle %ALLSPHINXOPTS% %BUILDDIR%/pickle | ||
101 | + if errorlevel 1 exit /b 1 | ||
102 | + echo. | ||
103 | + echo.Build finished; now you can process the pickle files. | ||
104 | + goto end | ||
105 | +) | ||
106 | + | ||
107 | +if "%1" == "json" ( | ||
108 | + %SPHINXBUILD% -b json %ALLSPHINXOPTS% %BUILDDIR%/json | ||
109 | + if errorlevel 1 exit /b 1 | ||
110 | + echo. | ||
111 | + echo.Build finished; now you can process the JSON files. | ||
112 | + goto end | ||
113 | +) | ||
114 | + | ||
115 | +if "%1" == "htmlhelp" ( | ||
116 | + %SPHINXBUILD% -b htmlhelp %ALLSPHINXOPTS% %BUILDDIR%/htmlhelp | ||
117 | + if errorlevel 1 exit /b 1 | ||
118 | + echo. | ||
119 | + echo.Build finished; now you can run HTML Help Workshop with the ^ | ||
120 | +.hhp project file in %BUILDDIR%/htmlhelp. | ||
121 | + goto end | ||
122 | +) | ||
123 | + | ||
124 | +if "%1" == "qthelp" ( | ||
125 | + %SPHINXBUILD% -b qthelp %ALLSPHINXOPTS% %BUILDDIR%/qthelp | ||
126 | + if errorlevel 1 exit /b 1 | ||
127 | + echo. | ||
128 | + echo.Build finished; now you can run "qcollectiongenerator" with the ^ | ||
129 | +.qhcp project file in %BUILDDIR%/qthelp, like this: | ||
130 | + echo.^> qcollectiongenerator %BUILDDIR%\qthelp\kappa.qhcp | ||
131 | + echo.To view the help file: | ||
132 | + echo.^> assistant -collectionFile %BUILDDIR%\qthelp\kappa.ghc | ||
133 | + goto end | ||
134 | +) | ||
135 | + | ||
136 | +if "%1" == "devhelp" ( | ||
137 | + %SPHINXBUILD% -b devhelp %ALLSPHINXOPTS% %BUILDDIR%/devhelp | ||
138 | + if errorlevel 1 exit /b 1 | ||
139 | + echo. | ||
140 | + echo.Build finished. | ||
141 | + goto end | ||
142 | +) | ||
143 | + | ||
144 | +if "%1" == "epub" ( | ||
145 | + %SPHINXBUILD% -b epub %ALLSPHINXOPTS% %BUILDDIR%/epub | ||
146 | + if errorlevel 1 exit /b 1 | ||
147 | + echo. | ||
148 | + echo.Build finished. The epub file is in %BUILDDIR%/epub. | ||
149 | + goto end | ||
150 | +) | ||
151 | + | ||
152 | +if "%1" == "latex" ( | ||
153 | + %SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex | ||
154 | + if errorlevel 1 exit /b 1 | ||
155 | + echo. | ||
156 | + echo.Build finished; the LaTeX files are in %BUILDDIR%/latex. | ||
157 | + goto end | ||
158 | +) | ||
159 | + | ||
160 | +if "%1" == "latexpdf" ( | ||
161 | + %SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex | ||
162 | + cd %BUILDDIR%/latex | ||
163 | + make all-pdf | ||
164 | + cd %~dp0 | ||
165 | + echo. | ||
166 | + echo.Build finished; the PDF files are in %BUILDDIR%/latex. | ||
167 | + goto end | ||
168 | +) | ||
169 | + | ||
170 | +if "%1" == "latexpdfja" ( | ||
171 | + %SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex | ||
172 | + cd %BUILDDIR%/latex | ||
173 | + make all-pdf-ja | ||
174 | + cd %~dp0 | ||
175 | + echo. | ||
176 | + echo.Build finished; the PDF files are in %BUILDDIR%/latex. | ||
177 | + goto end | ||
178 | +) | ||
179 | + | ||
180 | +if "%1" == "text" ( | ||
181 | + %SPHINXBUILD% -b text %ALLSPHINXOPTS% %BUILDDIR%/text | ||
182 | + if errorlevel 1 exit /b 1 | ||
183 | + echo. | ||
184 | + echo.Build finished. The text files are in %BUILDDIR%/text. | ||
185 | + goto end | ||
186 | +) | ||
187 | + | ||
188 | +if "%1" == "man" ( | ||
189 | + %SPHINXBUILD% -b man %ALLSPHINXOPTS% %BUILDDIR%/man | ||
190 | + if errorlevel 1 exit /b 1 | ||
191 | + echo. | ||
192 | + echo.Build finished. The manual pages are in %BUILDDIR%/man. | ||
193 | + goto end | ||
194 | +) | ||
195 | + | ||
196 | +if "%1" == "texinfo" ( | ||
197 | + %SPHINXBUILD% -b texinfo %ALLSPHINXOPTS% %BUILDDIR%/texinfo | ||
198 | + if errorlevel 1 exit /b 1 | ||
199 | + echo. | ||
200 | + echo.Build finished. The Texinfo files are in %BUILDDIR%/texinfo. | ||
201 | + goto end | ||
202 | +) | ||
203 | + | ||
204 | +if "%1" == "gettext" ( | ||
205 | + %SPHINXBUILD% -b gettext %I18NSPHINXOPTS% %BUILDDIR%/locale | ||
206 | + if errorlevel 1 exit /b 1 | ||
207 | + echo. | ||
208 | + echo.Build finished. The message catalogs are in %BUILDDIR%/locale. | ||
209 | + goto end | ||
210 | +) | ||
211 | + | ||
212 | +if "%1" == "changes" ( | ||
213 | + %SPHINXBUILD% -b changes %ALLSPHINXOPTS% %BUILDDIR%/changes | ||
214 | + if errorlevel 1 exit /b 1 | ||
215 | + echo. | ||
216 | + echo.The overview file is in %BUILDDIR%/changes. | ||
217 | + goto end | ||
218 | +) | ||
219 | + | ||
220 | +if "%1" == "linkcheck" ( | ||
221 | + %SPHINXBUILD% -b linkcheck %ALLSPHINXOPTS% %BUILDDIR%/linkcheck | ||
222 | + if errorlevel 1 exit /b 1 | ||
223 | + echo. | ||
224 | + echo.Link check complete; look for any errors in the above output ^ | ||
225 | +or in %BUILDDIR%/linkcheck/output.txt. | ||
226 | + goto end | ||
227 | +) | ||
228 | + | ||
229 | +if "%1" == "doctest" ( | ||
230 | + %SPHINXBUILD% -b doctest %ALLSPHINXOPTS% %BUILDDIR%/doctest | ||
231 | + if errorlevel 1 exit /b 1 | ||
232 | + echo. | ||
233 | + echo.Testing of doctests in the sources finished, look at the ^ | ||
234 | +results in %BUILDDIR%/doctest/output.txt. | ||
235 | + goto end | ||
236 | +) | ||
237 | + | ||
238 | +if "%1" == "coverage" ( | ||
239 | + %SPHINXBUILD% -b coverage %ALLSPHINXOPTS% %BUILDDIR%/coverage | ||
240 | + if errorlevel 1 exit /b 1 | ||
241 | + echo. | ||
242 | + echo.Testing of coverage in the sources finished, look at the ^ | ||
243 | +results in %BUILDDIR%/coverage/python.txt. | ||
244 | + goto end | ||
245 | +) | ||
246 | + | ||
247 | +if "%1" == "xml" ( | ||
248 | + %SPHINXBUILD% -b xml %ALLSPHINXOPTS% %BUILDDIR%/xml | ||
249 | + if errorlevel 1 exit /b 1 | ||
250 | + echo. | ||
251 | + echo.Build finished. The XML files are in %BUILDDIR%/xml. | ||
252 | + goto end | ||
253 | +) | ||
254 | + | ||
255 | +if "%1" == "pseudoxml" ( | ||
256 | + %SPHINXBUILD% -b pseudoxml %ALLSPHINXOPTS% %BUILDDIR%/pseudoxml | ||
257 | + if errorlevel 1 exit /b 1 | ||
258 | + echo. | ||
259 | + echo.Build finished. The pseudo-XML files are in %BUILDDIR%/pseudoxml. | ||
260 | + goto end | ||
261 | +) | ||
262 | + | ||
263 | +:end |
docs/why.rst
0 → 100644
1 | +Why kappa? | ||
2 | +========== | ||
3 | + | ||
4 | +You can do everything kappa does by using the AWS Management Console, so why | ||
5 | +use kappa? Basically, because using GUI interfaces to drive your production | ||
6 | +environment is a really bad idea. You can't really automate GUI interfaces, | ||
7 | +you can't debug GUI interfaces, and you can't easily share techniques and best | ||
8 | +practices with a GUI. | ||
9 | + | ||
10 | +The goal of kappa is to put everything about your AWS Lambda function into | ||
11 | +files on a filesystem which can be easily versioned and shared. Once your | ||
12 | +files are in git, people on your team can create pull requests to merge new | ||
13 | +changes in and those pull requests can be reviewed, commented on, and | ||
14 | +eventually approved. This is a tried and true approach that has worked for | ||
15 | +more traditional deployment methodologies and will also work for AWS Lambda. |
1 | -# Copyright (c) 2014 Mitch Garnaat http://garnaat.org/ | 1 | +# Copyright (c) 2014, 2015 Mitch Garnaat |
2 | # | 2 | # |
3 | -# Licensed under the Apache License, Version 2.0 (the "License"). You | 3 | +# Licensed under the Apache License, Version 2.0 (the "License"); |
4 | -# may not use this file except in compliance with the License. A copy of | 4 | +# you may not use this file except in compliance with the License. |
5 | -# the License is located at | 5 | +# You may obtain a copy of the License at |
6 | # | 6 | # |
7 | -# http://aws.amazon.com/apache2.0/ | 7 | +# http://www.apache.org/licenses/LICENSE-2.0 |
8 | # | 8 | # |
9 | -# or in the "license" file accompanying this file. This file is | 9 | +# Unless required by applicable law or agreed to in writing, software |
10 | -# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF | 10 | +# distributed under the License is distributed on an "AS IS" BASIS, |
11 | -# ANY KIND, either express or implied. See the License for the specific | 11 | +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. |
12 | -# language governing permissions and limitations under the License. | 12 | +# See the License for the specific language governing permissions and |
13 | +# limitations under the License. | ||
13 | 14 | ||
14 | import os | 15 | import os |
15 | 16 | ... | ... |
kappa/aws.py
deleted
100644 → 0
1 | -# Copyright (c) 2014,2015 Mitch Garnaat http://garnaat.org/ | ||
2 | -# | ||
3 | -# Licensed under the Apache License, Version 2.0 (the "License"). You | ||
4 | -# may not use this file except in compliance with the License. A copy of | ||
5 | -# the License is located at | ||
6 | -# | ||
7 | -# http://aws.amazon.com/apache2.0/ | ||
8 | -# | ||
9 | -# or in the "license" file accompanying this file. This file is | ||
10 | -# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF | ||
11 | -# ANY KIND, either express or implied. See the License for the specific | ||
12 | -# language governing permissions and limitations under the License. | ||
13 | - | ||
14 | -import boto3 | ||
15 | - | ||
16 | - | ||
17 | -class __AWS(object): | ||
18 | - | ||
19 | - def __init__(self, profile_name=None, region_name=None): | ||
20 | - self._client_cache = {} | ||
21 | - self._session = boto3.session.Session( | ||
22 | - region_name=region_name, profile_name=profile_name) | ||
23 | - | ||
24 | - def create_client(self, client_name): | ||
25 | - if client_name not in self._client_cache: | ||
26 | - self._client_cache[client_name] = self._session.client( | ||
27 | - client_name) | ||
28 | - return self._client_cache[client_name] | ||
29 | - | ||
30 | - | ||
31 | -__Singleton_AWS = None | ||
32 | - | ||
33 | - | ||
34 | -def get_aws(context): | ||
35 | - global __Singleton_AWS | ||
36 | - if __Singleton_AWS is None: | ||
37 | - __Singleton_AWS = __AWS(context.profile, context.region) | ||
38 | - return __Singleton_AWS |
kappa/awsclient.py
0 → 100644
1 | +# Copyright (c) 2015 Mitch Garnaat | ||
2 | +# | ||
3 | +# Licensed under the Apache License, Version 2.0 (the "License"); | ||
4 | +# you may not use this file except in compliance with the License. | ||
5 | +# You may obtain a copy of the License at | ||
6 | +# | ||
7 | +# http://www.apache.org/licenses/LICENSE-2.0 | ||
8 | +# | ||
9 | +# Unless required by applicable law or agreed to in writing, software | ||
10 | +# distributed under the License is distributed on an "AS IS" BASIS, | ||
11 | +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | ||
12 | +# See the License for the specific language governing permissions and | ||
13 | +# limitations under the License. | ||
14 | + | ||
15 | +import logging | ||
16 | + | ||
17 | +import jmespath | ||
18 | +import boto3 | ||
19 | + | ||
20 | + | ||
21 | +LOG = logging.getLogger(__name__) | ||
22 | + | ||
23 | +_session_cache = {} | ||
24 | + | ||
25 | + | ||
26 | +class AWSClient(object): | ||
27 | + | ||
28 | + def __init__(self, service_name, session): | ||
29 | + self._service_name = service_name | ||
30 | + self._session = session | ||
31 | + self.client = self._create_client() | ||
32 | + | ||
33 | + @property | ||
34 | + def service_name(self): | ||
35 | + return self._service_name | ||
36 | + | ||
37 | + @property | ||
38 | + def session(self): | ||
39 | + return self._session | ||
40 | + | ||
41 | + @property | ||
42 | + def region_name(self): | ||
43 | + return self.client.meta.region_name | ||
44 | + | ||
45 | + def _create_client(self): | ||
46 | + client = self._session.client(self._service_name) | ||
47 | + return client | ||
48 | + | ||
49 | + def call(self, op_name, query=None, **kwargs): | ||
50 | + """ | ||
51 | + Make a request to a method in this client. The response data is | ||
52 | + returned from this call as native Python data structures. | ||
53 | + | ||
54 | + This method differs from just calling the client method directly | ||
55 | + in the following ways: | ||
56 | + | ||
57 | + * It automatically handles the pagination rather than | ||
58 | + relying on a separate pagination method call. | ||
59 | + * You can pass an optional jmespath query and this query | ||
60 | + will be applied to the data returned from the low-level | ||
61 | + call. This allows you to tailor the returned data to be | ||
62 | + exactly what you want. | ||
63 | + | ||
64 | + :type op_name: str | ||
65 | + :param op_name: The name of the request you wish to make. | ||
66 | + | ||
67 | + :type query: str | ||
68 | + :param query: A jmespath query that will be applied to the | ||
69 | + data returned by the operation prior to returning | ||
70 | + it to the user. | ||
71 | + | ||
72 | + :type kwargs: keyword arguments | ||
73 | + :param kwargs: Additional keyword arguments you want to pass | ||
74 | + to the method when making the request. | ||
75 | + """ | ||
76 | + LOG.debug(kwargs) | ||
77 | + if query: | ||
78 | + query = jmespath.compile(query) | ||
79 | + if self.client.can_paginate(op_name): | ||
80 | + paginator = self.client.get_paginator(op_name) | ||
81 | + results = paginator.paginate(**kwargs) | ||
82 | + data = results.build_full_result() | ||
83 | + else: | ||
84 | + op = getattr(self.client, op_name) | ||
85 | + data = op(**kwargs) | ||
86 | + if query: | ||
87 | + data = query.search(data) | ||
88 | + return data | ||
89 | + | ||
90 | + | ||
91 | +def create_session(profile_name, region_name): | ||
92 | + global _session_cache | ||
93 | + session_key = '{}:{}'.format(profile_name, region_name) | ||
94 | + if session_key not in _session_cache: | ||
95 | + session = boto3.session.Session( | ||
96 | + region_name=region_name, profile_name=profile_name) | ||
97 | + _session_cache[session_key] = session | ||
98 | + return _session_cache[session_key] | ||
99 | + | ||
100 | + | ||
101 | +def create_client(service_name, session): | ||
102 | + return AWSClient(service_name, session) |
1 | -# Copyright (c) 2014 Mitch Garnaat http://garnaat.org/ | 1 | +# Copyright (c) 2014, 2015 Mitch Garnaat |
2 | # | 2 | # |
3 | -# Licensed under the Apache License, Version 2.0 (the "License"). You | 3 | +# Licensed under the Apache License, Version 2.0 (the "License"); |
4 | -# may not use this file except in compliance with the License. A copy of | 4 | +# you may not use this file except in compliance with the License. |
5 | -# the License is located at | 5 | +# You may obtain a copy of the License at |
6 | # | 6 | # |
7 | -# http://aws.amazon.com/apache2.0/ | 7 | +# http://www.apache.org/licenses/LICENSE-2.0 |
8 | # | 8 | # |
9 | -# or in the "license" file accompanying this file. This file is | 9 | +# Unless required by applicable law or agreed to in writing, software |
10 | -# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF | 10 | +# distributed under the License is distributed on an "AS IS" BASIS, |
11 | -# ANY KIND, either express or implied. See the License for the specific | 11 | +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. |
12 | -# language governing permissions and limitations under the License. | 12 | +# See the License for the specific language governing permissions and |
13 | +# limitations under the License. | ||
13 | 14 | ||
14 | import logging | 15 | import logging |
15 | import yaml | 16 | import yaml |
16 | import time | 17 | import time |
18 | +import os | ||
19 | +import shutil | ||
17 | 20 | ||
18 | import kappa.function | 21 | import kappa.function |
22 | +import kappa.restapi | ||
19 | import kappa.event_source | 23 | import kappa.event_source |
20 | import kappa.policy | 24 | import kappa.policy |
21 | import kappa.role | 25 | import kappa.role |
26 | +import kappa.awsclient | ||
27 | + | ||
28 | +import placebo | ||
22 | 29 | ||
23 | LOG = logging.getLogger(__name__) | 30 | LOG = logging.getLogger(__name__) |
24 | 31 | ||
25 | DebugFmtString = '%(asctime)s - %(name)s - %(levelname)s - %(message)s' | 32 | DebugFmtString = '%(asctime)s - %(name)s - %(levelname)s - %(message)s' |
26 | -InfoFmtString = '\t%(message)s' | 33 | +InfoFmtString = '...%(message)s' |
27 | 34 | ||
28 | 35 | ||
29 | class Context(object): | 36 | class Context(object): |
30 | 37 | ||
31 | - def __init__(self, config_file, debug=False): | 38 | + def __init__(self, config_file, environment=None, |
39 | + debug=False, recording_path=None): | ||
32 | if debug: | 40 | if debug: |
33 | self.set_logger('kappa', logging.DEBUG) | 41 | self.set_logger('kappa', logging.DEBUG) |
34 | else: | 42 | else: |
35 | self.set_logger('kappa', logging.INFO) | 43 | self.set_logger('kappa', logging.INFO) |
44 | + self._load_cache() | ||
36 | self.config = yaml.load(config_file) | 45 | self.config = yaml.load(config_file) |
37 | - if 'policy' in self.config.get('iam', ''): | 46 | + self.environment = environment |
47 | + profile = self.config['environments'][self.environment]['profile'] | ||
48 | + region = self.config['environments'][self.environment]['region'] | ||
49 | + self.session = kappa.awsclient.create_session(profile, region) | ||
50 | + if recording_path: | ||
51 | + self.pill = placebo.attach(self.session, recording_path) | ||
52 | + self.pill.record() | ||
38 | self.policy = kappa.policy.Policy( | 53 | self.policy = kappa.policy.Policy( |
39 | - self, self.config['iam']['policy']) | 54 | + self, self.config['environments'][self.environment]) |
40 | - else: | ||
41 | - self.policy = None | ||
42 | - if 'role' in self.config.get('iam', ''): | ||
43 | self.role = kappa.role.Role( | 55 | self.role = kappa.role.Role( |
44 | - self, self.config['iam']['role']) | 56 | + self, self.config['environments'][self.environment]) |
45 | - else: | ||
46 | - self.role = None | ||
47 | self.function = kappa.function.Function( | 57 | self.function = kappa.function.Function( |
48 | self, self.config['lambda']) | 58 | self, self.config['lambda']) |
59 | + if 'restapi' in self.config: | ||
60 | + self.restapi = kappa.restapi.RestApi( | ||
61 | + self, self.config['restapi']) | ||
62 | + else: | ||
63 | + self.restapi = None | ||
49 | self.event_sources = [] | 64 | self.event_sources = [] |
50 | self._create_event_sources() | 65 | self._create_event_sources() |
51 | 66 | ||
67 | + def _load_cache(self): | ||
68 | + self.cache = {} | ||
69 | + if os.path.isdir('.kappa'): | ||
70 | + cache_file = os.path.join('.kappa', 'cache') | ||
71 | + if os.path.isfile(cache_file): | ||
72 | + with open(cache_file, 'r') as fp: | ||
73 | + self.cache = yaml.load(fp) | ||
74 | + | ||
75 | + def _delete_cache(self): | ||
76 | + if os.path.isdir('.kappa'): | ||
77 | + shutil.rmtree('.kappa') | ||
78 | + self.cache = {} | ||
79 | + | ||
80 | + def _save_cache(self): | ||
81 | + if not os.path.isdir('.kappa'): | ||
82 | + os.mkdir('.kappa') | ||
83 | + cache_file = os.path.join('.kappa', 'cache') | ||
84 | + with open(cache_file, 'w') as fp: | ||
85 | + yaml.dump(self.cache, fp) | ||
86 | + | ||
87 | + def get_cache_value(self, key): | ||
88 | + return self.cache.setdefault(self.environment, dict()).get(key) | ||
89 | + | ||
90 | + def set_cache_value(self, key, value): | ||
91 | + self.cache.setdefault( | ||
92 | + self.environment, dict())[key] = value.encode('utf-8') | ||
93 | + self._save_cache() | ||
94 | + | ||
95 | + @property | ||
96 | + def name(self): | ||
97 | + return self.config.get('name', os.path.basename(os.getcwd())) | ||
98 | + | ||
52 | @property | 99 | @property |
53 | def profile(self): | 100 | def profile(self): |
54 | - return self.config.get('profile', None) | 101 | + return self.config['environments'][self.environment]['profile'] |
55 | 102 | ||
56 | @property | 103 | @property |
57 | def region(self): | 104 | def region(self): |
58 | - return self.config.get('region', None) | 105 | + return self.config['environments'][self.environment]['region'] |
106 | + | ||
107 | + @property | ||
108 | + def record(self): | ||
109 | + return self.config.get('record', False) | ||
59 | 110 | ||
60 | @property | 111 | @property |
61 | def lambda_config(self): | 112 | def lambda_config(self): |
62 | - return self.config.get('lambda', None) | 113 | + return self.config.get('lambda') |
114 | + | ||
115 | + @property | ||
116 | + def test_dir(self): | ||
117 | + return self.config.get('tests', '_tests') | ||
118 | + | ||
119 | + @property | ||
120 | + def source_dir(self): | ||
121 | + return self.config.get('source', '_src') | ||
122 | + | ||
123 | + @property | ||
124 | + def unit_test_runner(self): | ||
125 | + return self.config.get('unit_test_runner', | ||
126 | + 'nosetests . ../{}/unit/'.format(self.test_dir)) | ||
63 | 127 | ||
64 | @property | 128 | @property |
65 | def exec_role_arn(self): | 129 | def exec_role_arn(self): |
... | @@ -92,8 +156,9 @@ class Context(object): | ... | @@ -92,8 +156,9 @@ class Context(object): |
92 | log.addHandler(ch) | 156 | log.addHandler(ch) |
93 | 157 | ||
94 | def _create_event_sources(self): | 158 | def _create_event_sources(self): |
95 | - if 'event_sources' in self.config['lambda']: | 159 | + env_cfg = self.config['environments'][self.environment] |
96 | - for event_source_cfg in self.config['lambda']['event_sources']: | 160 | + if 'event_sources' in env_cfg: |
161 | + for event_source_cfg in env_cfg['event_sources']: | ||
97 | _, _, svc, _ = event_source_cfg['arn'].split(':', 3) | 162 | _, _, svc, _ = event_source_cfg['arn'].split(':', 3) |
98 | if svc == 'kinesis': | 163 | if svc == 'kinesis': |
99 | self.event_sources.append( | 164 | self.event_sources.append( |
... | @@ -122,6 +187,23 @@ class Context(object): | ... | @@ -122,6 +187,23 @@ class Context(object): |
122 | for event_source in self.event_sources: | 187 | for event_source in self.event_sources: |
123 | event_source.update(self.function) | 188 | event_source.update(self.function) |
124 | 189 | ||
190 | + def list_event_sources(self): | ||
191 | + event_sources = [] | ||
192 | + for event_source in self.event_sources: | ||
193 | + event_sources.append({'arn': event_source.arn, | ||
194 | + 'starting_position': event_source.starting_position, | ||
195 | + 'batch_size': event_source.batch_size, | ||
196 | + 'enabled': event_source.enabled}) | ||
197 | + return event_sources | ||
198 | + | ||
199 | + def enable_event_sources(self): | ||
200 | + for event_source in self.event_sources: | ||
201 | + event_source.enable(self.function) | ||
202 | + | ||
203 | + def disable_event_sources(self): | ||
204 | + for event_source in self.event_sources: | ||
205 | + event_source.disable(self.function) | ||
206 | + | ||
125 | def create(self): | 207 | def create(self): |
126 | if self.policy: | 208 | if self.policy: |
127 | self.policy.create() | 209 | self.policy.create() |
... | @@ -133,12 +215,31 @@ class Context(object): | ... | @@ -133,12 +215,31 @@ class Context(object): |
133 | LOG.debug('Waiting for policy/role propogation') | 215 | LOG.debug('Waiting for policy/role propogation') |
134 | time.sleep(5) | 216 | time.sleep(5) |
135 | self.function.create() | 217 | self.function.create() |
218 | + self.add_event_sources() | ||
219 | + | ||
220 | + def deploy(self): | ||
221 | + if self.policy: | ||
222 | + self.policy.deploy() | ||
223 | + if self.role: | ||
224 | + self.role.create() | ||
225 | + self.function.deploy() | ||
226 | + if self.restapi: | ||
227 | + self.restapi.deploy() | ||
228 | + | ||
229 | + def invoke(self, data): | ||
230 | + return self.function.invoke(data) | ||
136 | 231 | ||
137 | - def update_code(self): | 232 | + def unit_tests(self): |
138 | - self.function.update() | 233 | + # run any unit tests |
234 | + unit_test_path = os.path.join(self.test_dir, 'unit') | ||
235 | + if os.path.exists(unit_test_path): | ||
236 | + os.chdir(self.source_dir) | ||
237 | + print('running unit tests') | ||
238 | + pipe = os.popen(self.unit_test_runner, 'r') | ||
239 | + print(pipe.read()) | ||
139 | 240 | ||
140 | - def invoke(self): | 241 | + def test(self): |
141 | - return self.function.invoke() | 242 | + return self.unit_tests() |
142 | 243 | ||
143 | def dryrun(self): | 244 | def dryrun(self): |
144 | return self.function.dryrun() | 245 | return self.function.dryrun() |
... | @@ -154,12 +255,15 @@ class Context(object): | ... | @@ -154,12 +255,15 @@ class Context(object): |
154 | event_source.remove(self.function) | 255 | event_source.remove(self.function) |
155 | self.function.log.delete() | 256 | self.function.log.delete() |
156 | self.function.delete() | 257 | self.function.delete() |
258 | + if self.restapi: | ||
259 | + self.restapi.delete() | ||
157 | time.sleep(5) | 260 | time.sleep(5) |
158 | if self.role: | 261 | if self.role: |
159 | self.role.delete() | 262 | self.role.delete() |
160 | time.sleep(5) | 263 | time.sleep(5) |
161 | if self.policy: | 264 | if self.policy: |
162 | self.policy.delete() | 265 | self.policy.delete() |
266 | + self._delete_cache() | ||
163 | 267 | ||
164 | def status(self): | 268 | def status(self): |
165 | status = {} | 269 | status = {} | ... | ... |
1 | -# Copyright (c) 2014 Mitch Garnaat http://garnaat.org/ | 1 | +# Copyright (c) 2014, 2015 Mitch Garnaat |
2 | # | 2 | # |
3 | -# Licensed under the Apache License, Version 2.0 (the "License"). You | 3 | +# Licensed under the Apache License, Version 2.0 (the "License"); |
4 | -# may not use this file except in compliance with the License. A copy of | 4 | +# you may not use this file except in compliance with the License. |
5 | -# the License is located at | 5 | +# You may obtain a copy of the License at |
6 | # | 6 | # |
7 | -# http://aws.amazon.com/apache2.0/ | 7 | +# http://www.apache.org/licenses/LICENSE-2.0 |
8 | # | 8 | # |
9 | -# or in the "license" file accompanying this file. This file is | 9 | +# Unless required by applicable law or agreed to in writing, software |
10 | -# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF | 10 | +# distributed under the License is distributed on an "AS IS" BASIS, |
11 | -# ANY KIND, either express or implied. See the License for the specific | 11 | +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. |
12 | -# language governing permissions and limitations under the License. | 12 | +# See the License for the specific language governing permissions and |
13 | +# limitations under the License. | ||
13 | 14 | ||
14 | import logging | 15 | import logging |
15 | 16 | ||
16 | from botocore.exceptions import ClientError | 17 | from botocore.exceptions import ClientError |
17 | 18 | ||
18 | -import kappa.aws | 19 | +import kappa.awsclient |
19 | 20 | ||
20 | LOG = logging.getLogger(__name__) | 21 | LOG = logging.getLogger(__name__) |
21 | 22 | ||
... | @@ -32,7 +33,7 @@ class EventSource(object): | ... | @@ -32,7 +33,7 @@ class EventSource(object): |
32 | 33 | ||
33 | @property | 34 | @property |
34 | def starting_position(self): | 35 | def starting_position(self): |
35 | - return self._config.get('starting_position', 'TRIM_HORIZON') | 36 | + return self._config.get('starting_position', 'LATEST') |
36 | 37 | ||
37 | @property | 38 | @property |
38 | def batch_size(self): | 39 | def batch_size(self): |
... | @@ -40,19 +41,20 @@ class EventSource(object): | ... | @@ -40,19 +41,20 @@ class EventSource(object): |
40 | 41 | ||
41 | @property | 42 | @property |
42 | def enabled(self): | 43 | def enabled(self): |
43 | - return self._config.get('enabled', True) | 44 | + return self._config.get('enabled', False) |
44 | 45 | ||
45 | 46 | ||
46 | class KinesisEventSource(EventSource): | 47 | class KinesisEventSource(EventSource): |
47 | 48 | ||
48 | def __init__(self, context, config): | 49 | def __init__(self, context, config): |
49 | super(KinesisEventSource, self).__init__(context, config) | 50 | super(KinesisEventSource, self).__init__(context, config) |
50 | - aws = kappa.aws.get_aws(context) | 51 | + self._lambda = kappa.awsclient.create_client( |
51 | - self._lambda = aws.create_client('lambda') | 52 | + 'lambda', context.session) |
52 | 53 | ||
53 | def _get_uuid(self, function): | 54 | def _get_uuid(self, function): |
54 | uuid = None | 55 | uuid = None |
55 | - response = self._lambda.list_event_source_mappings( | 56 | + response = self._lambda.call( |
57 | + 'list_event_source_mappings', | ||
56 | FunctionName=function.name, | 58 | FunctionName=function.name, |
57 | EventSourceArn=self.arn) | 59 | EventSourceArn=self.arn) |
58 | LOG.debug(response) | 60 | LOG.debug(response) |
... | @@ -62,7 +64,8 @@ class KinesisEventSource(EventSource): | ... | @@ -62,7 +64,8 @@ class KinesisEventSource(EventSource): |
62 | 64 | ||
63 | def add(self, function): | 65 | def add(self, function): |
64 | try: | 66 | try: |
65 | - response = self._lambda.create_event_source_mapping( | 67 | + response = self._lambda.call( |
68 | + 'create_event_source_mapping', | ||
66 | FunctionName=function.name, | 69 | FunctionName=function.name, |
67 | EventSourceArn=self.arn, | 70 | EventSourceArn=self.arn, |
68 | BatchSize=self.batch_size, | 71 | BatchSize=self.batch_size, |
... | @@ -73,12 +76,37 @@ class KinesisEventSource(EventSource): | ... | @@ -73,12 +76,37 @@ class KinesisEventSource(EventSource): |
73 | except Exception: | 76 | except Exception: |
74 | LOG.exception('Unable to add event source') | 77 | LOG.exception('Unable to add event source') |
75 | 78 | ||
79 | + def enable(self, function): | ||
80 | + self._config['enabled'] = True | ||
81 | + try: | ||
82 | + response = self._lambda.call( | ||
83 | + 'update_event_source_mapping', | ||
84 | + FunctionName=function.name, | ||
85 | + Enabled=self.enabled | ||
86 | + ) | ||
87 | + LOG.debug(response) | ||
88 | + except Exception: | ||
89 | + LOG.exception('Unable to enable event source') | ||
90 | + | ||
91 | + def disable(self, function): | ||
92 | + self._config['enabled'] = False | ||
93 | + try: | ||
94 | + response = self._lambda.call( | ||
95 | + 'update_event_source_mapping', | ||
96 | + UUID=self._get_uuid(function), | ||
97 | + Enabled=self.enabled | ||
98 | + ) | ||
99 | + LOG.debug(response) | ||
100 | + except Exception: | ||
101 | + LOG.exception('Unable to disable event source') | ||
102 | + | ||
76 | def update(self, function): | 103 | def update(self, function): |
77 | response = None | 104 | response = None |
78 | uuid = self._get_uuid(function) | 105 | uuid = self._get_uuid(function) |
79 | if uuid: | 106 | if uuid: |
80 | try: | 107 | try: |
81 | - response = self._lambda.update_event_source_mapping( | 108 | + response = self._lambda.call( |
109 | + 'update_event_source_mapping', | ||
82 | BatchSize=self.batch_size, | 110 | BatchSize=self.batch_size, |
83 | Enabled=self.enabled, | 111 | Enabled=self.enabled, |
84 | FunctionName=function.arn) | 112 | FunctionName=function.arn) |
... | @@ -90,7 +118,8 @@ class KinesisEventSource(EventSource): | ... | @@ -90,7 +118,8 @@ class KinesisEventSource(EventSource): |
90 | response = None | 118 | response = None |
91 | uuid = self._get_uuid(function) | 119 | uuid = self._get_uuid(function) |
92 | if uuid: | 120 | if uuid: |
93 | - response = self._lambda.delete_event_source_mapping( | 121 | + response = self._lambda.call( |
122 | + 'delete_event_source_mapping', | ||
94 | UUID=uuid) | 123 | UUID=uuid) |
95 | LOG.debug(response) | 124 | LOG.debug(response) |
96 | return response | 125 | return response |
... | @@ -101,7 +130,8 @@ class KinesisEventSource(EventSource): | ... | @@ -101,7 +130,8 @@ class KinesisEventSource(EventSource): |
101 | uuid = self._get_uuid(function) | 130 | uuid = self._get_uuid(function) |
102 | if uuid: | 131 | if uuid: |
103 | try: | 132 | try: |
104 | - response = self._lambda.get_event_source_mapping( | 133 | + response = self._lambda.call( |
134 | + 'get_event_source_mapping', | ||
105 | UUID=self._get_uuid(function)) | 135 | UUID=self._get_uuid(function)) |
106 | LOG.debug(response) | 136 | LOG.debug(response) |
107 | except ClientError: | 137 | except ClientError: |
... | @@ -121,8 +151,7 @@ class S3EventSource(EventSource): | ... | @@ -121,8 +151,7 @@ class S3EventSource(EventSource): |
121 | 151 | ||
122 | def __init__(self, context, config): | 152 | def __init__(self, context, config): |
123 | super(S3EventSource, self).__init__(context, config) | 153 | super(S3EventSource, self).__init__(context, config) |
124 | - aws = kappa.aws.get_aws(context) | 154 | + self._s3 = kappa.awsclient.create_client('s3', context.session) |
125 | - self._s3 = aws.create_client('s3') | ||
126 | 155 | ||
127 | def _make_notification_id(self, function_name): | 156 | def _make_notification_id(self, function_name): |
128 | return 'Kappa-%s-notification' % function_name | 157 | return 'Kappa-%s-notification' % function_name |
... | @@ -132,7 +161,7 @@ class S3EventSource(EventSource): | ... | @@ -132,7 +161,7 @@ class S3EventSource(EventSource): |
132 | 161 | ||
133 | def add(self, function): | 162 | def add(self, function): |
134 | notification_spec = { | 163 | notification_spec = { |
135 | - 'LambdaFunctionConfigurations':[ | 164 | + 'LambdaFunctionConfigurations': [ |
136 | { | 165 | { |
137 | 'Id': self._make_notification_id(function.name), | 166 | 'Id': self._make_notification_id(function.name), |
138 | 'Events': [e for e in self._config['events']], | 167 | 'Events': [e for e in self._config['events']], |
... | @@ -141,7 +170,8 @@ class S3EventSource(EventSource): | ... | @@ -141,7 +170,8 @@ class S3EventSource(EventSource): |
141 | ] | 170 | ] |
142 | } | 171 | } |
143 | try: | 172 | try: |
144 | - response = self._s3.put_bucket_notification_configuration( | 173 | + response = self._s3.call( |
174 | + 'put_bucket_notification_configuration', | ||
145 | Bucket=self._get_bucket_name(), | 175 | Bucket=self._get_bucket_name(), |
146 | NotificationConfiguration=notification_spec) | 176 | NotificationConfiguration=notification_spec) |
147 | LOG.debug(response) | 177 | LOG.debug(response) |
... | @@ -154,7 +184,8 @@ class S3EventSource(EventSource): | ... | @@ -154,7 +184,8 @@ class S3EventSource(EventSource): |
154 | 184 | ||
155 | def remove(self, function): | 185 | def remove(self, function): |
156 | LOG.debug('removing s3 notification') | 186 | LOG.debug('removing s3 notification') |
157 | - response = self._s3.get_bucket_notification( | 187 | + response = self._s3.call( |
188 | + 'get_bucket_notification', | ||
158 | Bucket=self._get_bucket_name()) | 189 | Bucket=self._get_bucket_name()) |
159 | LOG.debug(response) | 190 | LOG.debug(response) |
160 | if 'CloudFunctionConfiguration' in response: | 191 | if 'CloudFunctionConfiguration' in response: |
... | @@ -162,14 +193,16 @@ class S3EventSource(EventSource): | ... | @@ -162,14 +193,16 @@ class S3EventSource(EventSource): |
162 | if fn_arn == function.arn: | 193 | if fn_arn == function.arn: |
163 | del response['CloudFunctionConfiguration'] | 194 | del response['CloudFunctionConfiguration'] |
164 | del response['ResponseMetadata'] | 195 | del response['ResponseMetadata'] |
165 | - response = self._s3.put_bucket_notification( | 196 | + response = self._s3.call( |
197 | + 'put_bucket_notification', | ||
166 | Bucket=self._get_bucket_name(), | 198 | Bucket=self._get_bucket_name(), |
167 | NotificationConfiguration=response) | 199 | NotificationConfiguration=response) |
168 | LOG.debug(response) | 200 | LOG.debug(response) |
169 | 201 | ||
170 | def status(self, function): | 202 | def status(self, function): |
171 | LOG.debug('status for s3 notification for %s', function.name) | 203 | LOG.debug('status for s3 notification for %s', function.name) |
172 | - response = self._s3.get_bucket_notification( | 204 | + response = self._s3.call( |
205 | + 'get_bucket_notification', | ||
173 | Bucket=self._get_bucket_name()) | 206 | Bucket=self._get_bucket_name()) |
174 | LOG.debug(response) | 207 | LOG.debug(response) |
175 | if 'CloudFunctionConfiguration' not in response: | 208 | if 'CloudFunctionConfiguration' not in response: |
... | @@ -181,15 +214,15 @@ class SNSEventSource(EventSource): | ... | @@ -181,15 +214,15 @@ class SNSEventSource(EventSource): |
181 | 214 | ||
182 | def __init__(self, context, config): | 215 | def __init__(self, context, config): |
183 | super(SNSEventSource, self).__init__(context, config) | 216 | super(SNSEventSource, self).__init__(context, config) |
184 | - aws = kappa.aws.get_aws(context) | 217 | + self._sns = kappa.awsclient.create_client('sns', context.session) |
185 | - self._sns = aws.create_client('sns') | ||
186 | 218 | ||
187 | def _make_notification_id(self, function_name): | 219 | def _make_notification_id(self, function_name): |
188 | return 'Kappa-%s-notification' % function_name | 220 | return 'Kappa-%s-notification' % function_name |
189 | 221 | ||
190 | def exists(self, function): | 222 | def exists(self, function): |
191 | try: | 223 | try: |
192 | - response = self._sns.list_subscriptions_by_topic( | 224 | + response = self._sns.call( |
225 | + 'list_subscriptions_by_topic', | ||
193 | TopicArn=self.arn) | 226 | TopicArn=self.arn) |
194 | LOG.debug(response) | 227 | LOG.debug(response) |
195 | for subscription in response['Subscriptions']: | 228 | for subscription in response['Subscriptions']: |
... | @@ -201,7 +234,8 @@ class SNSEventSource(EventSource): | ... | @@ -201,7 +234,8 @@ class SNSEventSource(EventSource): |
201 | 234 | ||
202 | def add(self, function): | 235 | def add(self, function): |
203 | try: | 236 | try: |
204 | - response = self._sns.subscribe( | 237 | + response = self._sns.call( |
238 | + 'subscribe', | ||
205 | TopicArn=self.arn, Protocol='lambda', | 239 | TopicArn=self.arn, Protocol='lambda', |
206 | Endpoint=function.arn) | 240 | Endpoint=function.arn) |
207 | LOG.debug(response) | 241 | LOG.debug(response) |
... | @@ -216,7 +250,8 @@ class SNSEventSource(EventSource): | ... | @@ -216,7 +250,8 @@ class SNSEventSource(EventSource): |
216 | try: | 250 | try: |
217 | subscription = self.exists(function) | 251 | subscription = self.exists(function) |
218 | if subscription: | 252 | if subscription: |
219 | - response = self._sns.unsubscribe( | 253 | + response = self._sns.call( |
254 | + 'unsubscribe', | ||
220 | SubscriptionArn=subscription['SubscriptionArn']) | 255 | SubscriptionArn=subscription['SubscriptionArn']) |
221 | LOG.debug(response) | 256 | LOG.debug(response) |
222 | except Exception: | 257 | except Exception: | ... | ... |
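The event-source classes above are selected by splitting the service field out of the configured ARN, as `Context._create_event_sources` does with `arn.split(':', 3)`. A small sketch of that dispatch (the handler names here are just labels standing in for the `kappa.event_source` classes):

```python
# Pick an event-source handler from the service component of an ARN,
# mirroring the split used in Context._create_event_sources.

def service_from_arn(arn):
    # 'arn:aws:kinesis:us-east-1:123456789012:stream/foo' -> 'kinesis'
    _, _, svc, _ = arn.split(':', 3)
    return svc


HANDLERS = {
    'kinesis': 'KinesisEventSource',
    's3': 'S3EventSource',
    'sns': 'SNSEventSource',
}


def pick_handler(arn):
    svc = service_from_arn(arn)
    if svc not in HANDLERS:
        raise ValueError('unsupported event source: %s' % arn)
    return HANDLERS[svc]


assert pick_handler('arn:aws:sns:us-east-1:123456789012:mytopic') == 'SNSEventSource'
```

Splitting with a maxsplit of 3 keeps the resource part intact even when it contains further colons, which S3 and Kinesis ARNs routinely do.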

1 | -# Copyright (c) 2014 Mitch Garnaat http://garnaat.org/ | 1 | +# Copyright (c) 2014, 2015 Mitch Garnaat |
2 | # | 2 | # |
3 | -# Licensed under the Apache License, Version 2.0 (the "License"). You | 3 | +# Licensed under the Apache License, Version 2.0 (the "License"); |
4 | -# may not use this file except in compliance with the License. A copy of | 4 | +# you may not use this file except in compliance with the License. |
5 | -# the License is located at | 5 | +# You may obtain a copy of the License at |
6 | # | 6 | # |
7 | -# http://aws.amazon.com/apache2.0/ | 7 | +# http://www.apache.org/licenses/LICENSE-2.0 |
8 | # | 8 | # |
9 | -# or in the "license" file accompanying this file. This file is | 9 | +# Unless required by applicable law or agreed to in writing, software |
10 | -# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF | 10 | +# distributed under the License is distributed on an "AS IS" BASIS, |
11 | -# ANY KIND, either express or implied. See the License for the specific | 11 | +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. |
12 | -# language governing permissions and limitations under the License. | 12 | +# See the License for the specific language governing permissions and |
13 | +# limitations under the License. | ||
13 | 14 | ||
14 | import logging | 15 | import logging |
15 | 16 | ||
16 | from botocore.exceptions import ClientError | 17 | from botocore.exceptions import ClientError |
17 | 18 | ||
18 | -import kappa.aws | 19 | +import kappa.awsclient |
19 | 20 | ||
20 | LOG = logging.getLogger(__name__) | 21 | LOG = logging.getLogger(__name__) |
21 | 22 | ||
... | @@ -25,12 +26,12 @@ class Log(object): | ... | @@ -25,12 +26,12 @@ class Log(object): |
25 | def __init__(self, context, log_group_name): | 26 | def __init__(self, context, log_group_name): |
26 | self._context = context | 27 | self._context = context |
27 | self.log_group_name = log_group_name | 28 | self.log_group_name = log_group_name |
28 | - aws = kappa.aws.get_aws(self._context) | 29 | + self._log_client = kappa.awsclient.create_client( |
29 | - self._log_svc = aws.create_client('logs') | 30 | + 'logs', context.session) |
30 | 31 | ||
31 | def _check_for_log_group(self): | 32 | def _check_for_log_group(self): |
32 | LOG.debug('checking for log group') | 33 | LOG.debug('checking for log group') |
33 | - response = self._log_svc.describe_log_groups() | 34 | + response = self._log_client.call('describe_log_groups') |
34 | log_group_names = [lg['logGroupName'] for lg in response['logGroups']] | 35 | log_group_names = [lg['logGroupName'] for lg in response['logGroups']] |
35 | return self.log_group_name in log_group_names | 36 | return self.log_group_name in log_group_names |
36 | 37 | ||
... | @@ -40,7 +41,8 @@ class Log(object): | ... | @@ -40,7 +41,8 @@ class Log(object): |
40 | LOG.info( | 41 | LOG.info( |
41 | 'log group %s has not been created yet', self.log_group_name) | 42 | 'log group %s has not been created yet', self.log_group_name) |
42 | return [] | 43 | return [] |
43 | - response = self._log_svc.describe_log_streams( | 44 | + response = self._log_client.call( |
45 | + 'describe_log_streams', | ||
44 | logGroupName=self.log_group_name) | 46 | logGroupName=self.log_group_name) |
45 | LOG.debug(response) | 47 | LOG.debug(response) |
46 | return response['logStreams'] | 48 | return response['logStreams'] |
... | @@ -58,7 +60,8 @@ class Log(object): | ... | @@ -58,7 +60,8 @@ class Log(object): |
58 | latest_stream = stream | 60 | latest_stream = stream |
59 | elif stream['lastEventTimestamp'] > latest_stream['lastEventTimestamp']: | 61 | elif stream['lastEventTimestamp'] > latest_stream['lastEventTimestamp']: |
60 | latest_stream = stream | 62 | latest_stream = stream |
61 | - response = self._log_svc.get_log_events( | 63 | + response = self._log_client.call( |
64 | + 'get_log_events', | ||
62 | logGroupName=self.log_group_name, | 65 | logGroupName=self.log_group_name, |
63 | logStreamName=latest_stream['logStreamName']) | 66 | logStreamName=latest_stream['logStreamName']) |
64 | LOG.debug(response) | 67 | LOG.debug(response) |
... | @@ -66,7 +69,8 @@ class Log(object): | ... | @@ -66,7 +69,8 @@ class Log(object): |
66 | 69 | ||
67 | def delete(self): | 70 | def delete(self): |
68 | try: | 71 | try: |
69 | - response = self._log_svc.delete_log_group( | 72 | + response = self._log_client.call( |
73 | + 'delete_log_group', | ||
70 | logGroupName=self.log_group_name) | 74 | logGroupName=self.log_group_name) |
71 | LOG.debug(response) | 75 | LOG.debug(response) |
72 | except ClientError: | 76 | except ClientError: | ... | ... |
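`tail()` selects the log stream with the greatest `lastEventTimestamp` before fetching events. That selection can be checked in isolation; a sketch with the loop factored into a helper (the helper name is illustrative, not part of the PR):

```python
def latest_log_stream(streams):
    """Pick the stream with the greatest lastEventTimestamp, mirroring
    the selection loop in Log.tail(); streams lacking the key are skipped."""
    latest = None
    for stream in streams:
        if 'lastEventTimestamp' not in stream:
            continue
        if latest is None or stream['lastEventTimestamp'] > latest['lastEventTimestamp']:
            latest = stream
    return latest

streams = [{'logStreamName': 'a', 'lastEventTimestamp': 100},
           {'logStreamName': 'b', 'lastEventTimestamp': 300},
           {'logStreamName': 'c', 'lastEventTimestamp': 200}]
# selects stream 'b', the one with the most recent event
```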
1 | -# Copyright (c) 2015 Mitch Garnaat http://garnaat.org/ | 1 | +# Copyright (c) 2014, 2015 Mitch Garnaat |
2 | # | 2 | # |
3 | -# Licensed under the Apache License, Version 2.0 (the "License"). You | 3 | +# Licensed under the Apache License, Version 2.0 (the "License"); |
4 | -# may not use this file except in compliance with the License. A copy of | 4 | +# you may not use this file except in compliance with the License. |
5 | -# the License is located at | 5 | +# You may obtain a copy of the License at |
6 | # | 6 | # |
7 | -# http://aws.amazon.com/apache2.0/ | 7 | +# http://www.apache.org/licenses/LICENSE-2.0 |
8 | # | 8 | # |
9 | -# or in the "license" file accompanying this file. This file is | 9 | +# Unless required by applicable law or agreed to in writing, software |
10 | -# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF | 10 | +# distributed under the License is distributed on an "AS IS" BASIS, |
11 | -# ANY KIND, either express or implied. See the License for the specific | 11 | +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. |
12 | -# language governing permissions and limitations under the License. | 12 | +# See the License for the specific language governing permissions and |
13 | +# limitations under the License. | ||
13 | 14 | ||
14 | import logging | 15 | import logging |
16 | +import json | ||
17 | +import hashlib | ||
15 | 18 | ||
16 | -import kappa.aws | 19 | +import kappa.awsclient |
17 | 20 | ||
18 | LOG = logging.getLogger(__name__) | 21 | LOG = logging.getLogger(__name__) |
19 | 22 | ||
20 | 23 | ||
21 | class Policy(object): | 24 | class Policy(object): |
22 | 25 | ||
26 | + _path_prefix = '/kappa/' | ||
27 | + | ||
23 | def __init__(self, context, config): | 28 | def __init__(self, context, config): |
24 | - self._context = context | 29 | + self.context = context |
25 | - self._config = config | 30 | + self.config = config |
26 | - aws = kappa.aws.get_aws(context) | 31 | + self._iam_client = kappa.awsclient.create_client( |
27 | - self._iam_svc = aws.create_client('iam') | 32 | + 'iam', self.context.session) |
28 | - self._arn = None | 33 | + self._arn = self.config['policy'].get('arn', None) |
29 | 34 | ||
30 | @property | 35 | @property |
31 | def name(self): | 36 | def name(self): |
32 | - return self._config['name'] | 37 | + return '{}_{}'.format(self.context.name, self.context.environment) |
33 | 38 | ||
34 | @property | 39 | @property |
35 | def description(self): | 40 | def description(self): |
36 | - return self._config.get('description', None) | 41 | + return 'A kappa policy to control access to {} resources'.format( |
42 | + self.context.environment) | ||
37 | 43 | ||
38 | - @property | ||
39 | def document(self): | 44 | def document(self): |
40 | - return self._config.get('document', None) | 45 | + if ('resources' not in self.config['policy'] and |
41 | - | 46 | + 'statements' not in self.config['policy']): |
42 | - @property | 47 | + return None |
43 | - def path(self): | 48 | + document = {'Version': '2012-10-17'} |
44 | - return self._config.get('path', '/kappa/') | 49 | + statements = [] |
50 | + document['Statement'] = statements | ||
51 | +        for resource in self.config['policy'].get('resources', []):	| ||
52 | + arn = resource['arn'] | ||
53 | + _, _, service, _ = arn.split(':', 3) | ||
54 | + statement = {"Effect": "Allow", | ||
55 | + "Resource": resource['arn']} | ||
56 | + actions = [] | ||
57 | + for action in resource['actions']: | ||
58 | + actions.append("{}:{}".format(service, action)) | ||
59 | + statement['Action'] = actions | ||
60 | + statements.append(statement) | ||
61 | + for statement in self.config['policy'].get('statements', []): | ||
62 | + statements.append(statement) | ||
63 | + return json.dumps(document, indent=2, sort_keys=True) | ||
45 | 64 | ||
46 | @property | 65 | @property |
47 | def arn(self): | 66 | def arn(self): |
... | @@ -52,20 +71,23 @@ class Policy(object): | ... | @@ -52,20 +71,23 @@ class Policy(object): |
52 | return self._arn | 71 | return self._arn |
53 | 72 | ||
54 | def _find_all_policies(self): | 73 | def _find_all_policies(self): |
55 | - # boto3 does not currently do pagination | ||
56 | - # so we have to do it ourselves | ||
57 | - policies = [] | ||
58 | try: | 74 | try: |
59 | - response = self._iam_svc.list_policies() | 75 | + response = self._iam_client.call( |
60 | - policies += response['Policies'] | 76 | + 'list_policies', PathPrefix=self._path_prefix) |
61 | - while response['IsTruncated']: | ||
62 | - LOG.debug('getting another page of policies') | ||
63 | - response = self._iam_svc.list_policies( | ||
64 | - Marker=response['Marker']) | ||
65 | - policies += response['Policies'] | ||
66 | except Exception: | 77 | except Exception: |
67 | LOG.exception('Error listing policies') | 78 | LOG.exception('Error listing policies') |
68 | - return policies | 79 | + response = {} |
80 | + return response.get('Policies', list()) | ||
81 | + | ||
82 | + def _list_versions(self): | ||
83 | + try: | ||
84 | + response = self._iam_client.call( | ||
85 | + 'list_policy_versions', | ||
86 | + PolicyArn=self.arn) | ||
87 | + except Exception: | ||
88 | + LOG.exception('Error listing policy versions') | ||
89 | + response = {} | ||
90 | + return response.get('Versions', list()) | ||
69 | 91 | ||
70 | def exists(self): | 92 | def exists(self): |
71 | for policy in self._find_all_policies(): | 93 | for policy in self._find_all_policies(): |
... | @@ -73,15 +95,63 @@ class Policy(object): | ... | @@ -73,15 +95,63 @@ class Policy(object): |
73 | return policy | 95 | return policy |
74 | return None | 96 | return None |
75 | 97 | ||
76 | - def create(self): | 98 | + def _add_policy_version(self): |
77 | - LOG.debug('creating policy %s', self.name) | 99 | + document = self.document() |
100 | + if not document: | ||
101 | + LOG.debug('not a custom policy, no need to version it') | ||
102 | + return | ||
103 | + versions = self._list_versions() | ||
104 | + if len(versions) == 5: | ||
105 | + try: | ||
106 | + response = self._iam_client.call( | ||
107 | + 'delete_policy_version', | ||
108 | + PolicyArn=self.arn, | ||
109 | + VersionId=versions[-1]['VersionId']) | ||
110 | + except Exception: | ||
111 | + LOG.exception('Unable to delete policy version') | ||
112 | + # update policy with a new version here | ||
113 | + try: | ||
114 | + response = self._iam_client.call( | ||
115 | + 'create_policy_version', | ||
116 | + PolicyArn=self.arn, | ||
117 | + PolicyDocument=document, | ||
118 | + SetAsDefault=True) | ||
119 | + LOG.debug(response) | ||
120 | + except Exception: | ||
121 | + LOG.exception('Error creating new Policy version') | ||
122 | + | ||
123 | + def _check_md5(self, document): | ||
124 | + m = hashlib.md5() | ||
125 | + m.update(document.encode('utf-8')) | ||
126 | + policy_md5 = m.hexdigest() | ||
127 | + cached_md5 = self.context.get_cache_value('policy_md5') | ||
128 | + LOG.debug('policy_md5: %s', policy_md5) | ||
129 | + LOG.debug('cached md5: %s', cached_md5) | ||
130 | + if policy_md5 != cached_md5: | ||
131 | + self.context.set_cache_value('policy_md5', policy_md5) | ||
132 | + return True | ||
133 | + return False | ||
134 | + | ||
135 | + def deploy(self): | ||
136 | + LOG.info('deploying policy %s', self.name) | ||
137 | + document = self.document() | ||
138 | + if not document: | ||
139 | + LOG.info('not a custom policy, no need to create it') | ||
140 | + return | ||
78 | policy = self.exists() | 141 | policy = self.exists() |
79 | - if not policy and self.document: | 142 | + if policy: |
80 | - with open(self.document, 'rb') as fp: | 143 | + if self._check_md5(document): |
144 | + self._add_policy_version() | ||
145 | + else: | ||
146 | + LOG.info('policy unchanged') | ||
147 | + else: | ||
148 | + # create a new policy | ||
149 | + self._check_md5(document) | ||
81 | try: | 150 | try: |
82 | - response = self._iam_svc.create_policy( | 151 | + response = self._iam_client.call( |
83 | - Path=self.path, PolicyName=self.name, | 152 | + 'create_policy', |
84 | - PolicyDocument=fp.read(), | 153 | + Path=self._path_prefix, PolicyName=self.name, |
154 | + PolicyDocument=document, | ||
85 | Description=self.description) | 155 | Description=self.description) |
86 | LOG.debug(response) | 156 | LOG.debug(response) |
87 | except Exception: | 157 | except Exception: |
... | @@ -91,9 +161,25 @@ class Policy(object): | ... | @@ -91,9 +161,25 @@ class Policy(object): |
91 | response = None | 161 | response = None |
92 | # Only delete the policy if it has a document associated with it. | 162 | # Only delete the policy if it has a document associated with it. |
93 | # This indicates that it was a custom policy created by kappa. | 163 | # This indicates that it was a custom policy created by kappa. |
94 | - if self.arn and self.document: | 164 | + document = self.document() |
95 | - LOG.debug('deleting policy %s', self.name) | 165 | + if self.arn and document: |
96 | - response = self._iam_svc.delete_policy(PolicyArn=self.arn) | 166 | + LOG.info('deleting policy %s', self.name) |
167 | + LOG.info('deleting all policy versions for %s', self.name) | ||
168 | + versions = self._list_versions() | ||
169 | + for version in versions: | ||
170 | + LOG.debug('deleting version %s', version['VersionId']) | ||
171 | + if not version['IsDefaultVersion']: | ||
172 | + try: | ||
173 | + response = self._iam_client.call( | ||
174 | + 'delete_policy_version', | ||
175 | + PolicyArn=self.arn, | ||
176 | + VersionId=version['VersionId']) | ||
177 | + except Exception: | ||
178 | + LOG.exception('Unable to delete policy version %s', | ||
179 | + version['VersionId']) | ||
180 | + LOG.debug('now delete policy') | ||
181 | + response = self._iam_client.call( | ||
182 | + 'delete_policy', PolicyArn=self.arn) | ||
97 | LOG.debug(response) | 183 | LOG.debug(response) |
98 | return response | 184 | return response |
99 | 185 | ... | ... |
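The new `document()` method builds the IAM policy JSON from the `policy.resources` section of the config, deriving the service prefix (e.g. `logs`) from each ARN. The transformation can be exercised without AWS; a minimal standalone sketch (the function name and sample config are illustrative):

```python
import json

def build_policy_document(policy_config):
    """Turn a kappa-style 'policy' config block into an IAM policy JSON string."""
    if 'resources' not in policy_config and 'statements' not in policy_config:
        return None
    statements = []
    for resource in policy_config.get('resources', []):
        arn = resource['arn']
        # arn:aws:logs:us-east-1:123456789012:* -> service is the third field
        _, _, service, _ = arn.split(':', 3)
        statements.append({
            'Effect': 'Allow',
            'Resource': arn,
            'Action': ['{}:{}'.format(service, a) for a in resource['actions']],
        })
    statements.extend(policy_config.get('statements', []))
    return json.dumps({'Version': '2012-10-17', 'Statement': statements},
                      indent=2, sort_keys=True)

config = {'resources': [{'arn': 'arn:aws:logs:us-east-1:123456789012:*',
                         'actions': ['*', 'CreateLogGroup']}]}
doc = json.loads(build_policy_document(config))
```

The generated document is what `_check_md5` hashes, so `sort_keys=True` matters: it keeps the serialization stable between runs so unchanged policies are not re-versioned.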
kappa/restapi.py
0 → 100644
1 | +# Copyright (c) 2014, 2015 Mitch Garnaat | ||
2 | +# | ||
3 | +# Licensed under the Apache License, Version 2.0 (the "License"); | ||
4 | +# you may not use this file except in compliance with the License. | ||
5 | +# You may obtain a copy of the License at | ||
6 | +# | ||
7 | +# http://www.apache.org/licenses/LICENSE-2.0 | ||
8 | +# | ||
9 | +# Unless required by applicable law or agreed to in writing, software | ||
10 | +# distributed under the License is distributed on an "AS IS" BASIS, | ||
11 | +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | ||
12 | +# See the License for the specific language governing permissions and | ||
13 | +# limitations under the License. | ||
14 | + | ||
15 | +import logging | ||
16 | + | ||
17 | +from botocore.exceptions import ClientError | ||
18 | + | ||
19 | +import kappa.awsclient | ||
20 | +import kappa.log | ||
21 | + | ||
22 | +LOG = logging.getLogger(__name__) | ||
23 | + | ||
24 | + | ||
25 | +class RestApi(object): | ||
26 | + | ||
27 | + def __init__(self, context, config): | ||
28 | + self._context = context | ||
29 | + self._config = config | ||
30 | + self._apigateway_client = kappa.awsclient.create_client( | ||
31 | + 'apigateway', context.session) | ||
32 | + self._api = None | ||
33 | + self._resources = None | ||
34 | + self._resource = None | ||
35 | + | ||
36 | + @property | ||
37 | + def arn(self): | ||
38 | + _, _, _, region, account, _ = self._context.function.arn.split(':', 5) | ||
39 | + arn = 'arn:aws:execute-api:{}:{}:{}/*/*/{}'.format( | ||
40 | + region, account, self.api_id, self.resource_name) | ||
41 | + return arn | ||
42 | + | ||
43 | + @property | ||
44 | + def api_name(self): | ||
45 | + return self._config['name'] | ||
46 | + | ||
47 | + @property | ||
48 | + def description(self): | ||
49 | + return self._config['description'] | ||
50 | + | ||
51 | + @property | ||
52 | + def resource_name(self): | ||
53 | + return self._config['resource']['name'] | ||
54 | + | ||
55 | + @property | ||
56 | + def parent_resource(self): | ||
57 | + return self._config['resource']['parent'] | ||
58 | + | ||
59 | + @property | ||
60 | + def full_path(self): | ||
61 | + parts = self.parent_resource.split('/') | ||
62 | + parts.append(self.resource_name) | ||
63 | + return '/'.join(parts) | ||
64 | + | ||
65 | + @property | ||
66 | + def api_id(self): | ||
67 | + api = self._get_api() | ||
68 | + return api.get('id') | ||
69 | + | ||
70 | + @property | ||
71 | + def resource_id(self): | ||
72 | + resources = self._get_resources() | ||
73 | + return resources.get(self.full_path).get('id') | ||
74 | + | ||
75 | + def _get_api(self): | ||
76 | + if self._api is None: | ||
77 | + try: | ||
78 | + response = self._apigateway_client.call( | ||
79 | + 'get_rest_apis') | ||
80 | + LOG.debug(response) | ||
81 | + for item in response['items']: | ||
82 | + if item['name'] == self.api_name: | ||
83 | + self._api = item | ||
84 | + except Exception: | ||
85 | + LOG.exception('Error finding restapi') | ||
86 | + return self._api | ||
87 | + | ||
88 | + def _get_resources(self): | ||
89 | + if self._resources is None: | ||
90 | + try: | ||
91 | + response = self._apigateway_client.call( | ||
92 | + 'get_resources', | ||
93 | + restApiId=self.api_id) | ||
94 | + LOG.debug(response) | ||
95 | + self._resources = {} | ||
96 | + for item in response['items']: | ||
97 | + self._resources[item['path']] = item | ||
98 | + except Exception: | ||
99 | + LOG.exception('Unable to find resources for: %s', | ||
100 | + self.api_name) | ||
101 | + return self._resources | ||
102 | + | ||
103 | + def create_restapi(self): | ||
104 | + if not self.api_exists(): | ||
105 | + LOG.info('creating restapi %s', self.api_name) | ||
106 | + try: | ||
107 | + response = self._apigateway_client.call( | ||
108 | + 'create_rest_api', | ||
109 | + name=self.api_name, | ||
110 | + description=self.description) | ||
111 | + LOG.debug(response) | ||
112 | + except Exception: | ||
113 | + LOG.exception('Unable to create new restapi') | ||
114 | + | ||
115 | + def create_resource_path(self): | ||
116 | + path = self.full_path | ||
117 | + parts = path.split('/') | ||
118 | + resources = self._get_resources() | ||
119 | + parent = None | ||
120 | + build_path = [] | ||
121 | + for part in parts: | ||
122 | + LOG.debug('part=%s', part) | ||
123 | + build_path.append(part) | ||
124 | + LOG.debug('build_path=%s', build_path) | ||
125 | + full_path = '/'.join(build_path) | ||
126 | + LOG.debug('full_path=%s', full_path) | ||
127 | +            if full_path == '':	| ||
128 | + parent = resources['/'] | ||
129 | + else: | ||
130 | + if full_path not in resources and parent: | ||
131 | + try: | ||
132 | + response = self._apigateway_client.call( | ||
133 | + 'create_resource', | ||
134 | + restApiId=self.api_id, | ||
135 | + parentId=parent['id'], | ||
136 | + pathPart=part) | ||
137 | + LOG.debug(response) | ||
138 | + resources[full_path] = response | ||
139 | + except Exception: | ||
140 | + LOG.exception('Unable to create new resource') | ||
141 | + parent = resources[full_path] | ||
142 | + self._item = resources[path] | ||
143 | + | ||
144 | + def create_method(self, method, config): | ||
145 | + LOG.info('creating method: %s', method) | ||
146 | + try: | ||
147 | + response = self._apigateway_client.call( | ||
148 | + 'put_method', | ||
149 | + restApiId=self.api_id, | ||
150 | + resourceId=self.resource_id, | ||
151 | + httpMethod=method, | ||
152 | + authorizationType=config.get('authorization_type'), | ||
153 | + apiKeyRequired=config.get('apikey_required', False) | ||
154 | + ) | ||
155 | + LOG.debug(response) | ||
156 | + LOG.debug('now create integration') | ||
157 | + uri = 'arn:aws:apigateway:{}:'.format( | ||
158 | + self._apigateway_client.region_name) | ||
159 | + uri += 'lambda:path/2015-03-31/functions/' | ||
160 | + uri += self._context.function.arn | ||
161 | + uri += ':${stageVariables.environment}/invocations' | ||
162 | + LOG.debug(uri) | ||
163 | + response = self._apigateway_client.call( | ||
164 | + 'put_integration', | ||
165 | + restApiId=self.api_id, | ||
166 | + resourceId=self.resource_id, | ||
167 | + httpMethod=method, | ||
168 | + integrationHttpMethod=method, | ||
169 | + type='AWS', | ||
170 | + uri=uri | ||
171 | + ) | ||
172 | + except Exception: | ||
173 | + LOG.exception('Unable to create integration: %s', method) | ||
174 | + | ||
175 | + def create_deployment(self): | ||
176 | + LOG.info('creating a deployment for %s to stage: %s', | ||
177 | + self.api_name, self._context.environment) | ||
178 | + try: | ||
179 | + response = self._apigateway_client.call( | ||
180 | + 'create_deployment', | ||
181 | + restApiId=self.api_id, | ||
182 | + stageName=self._context.environment | ||
183 | + ) | ||
184 | + LOG.debug(response) | ||
185 | +            LOG.info('Now deployed to stage: %s',	self._context.environment) | ||
186 | + except Exception: | ||
187 | + LOG.exception('Unable to create a deployment') | ||
188 | + | ||
189 | + def create_methods(self): | ||
190 | + resource_config = self._config['resource'] | ||
191 | + for method in resource_config.get('methods', dict()): | ||
192 | + if not self.method_exists(method): | ||
193 | + method_config = resource_config['methods'][method] | ||
194 | + self.create_method(method, method_config) | ||
195 | + | ||
196 | + def api_exists(self): | ||
197 | + return self._get_api() | ||
198 | + | ||
199 | + def resource_exists(self): | ||
200 | + resources = self._get_resources() | ||
201 | + return resources.get(self.full_path) | ||
202 | + | ||
203 | + def method_exists(self, method): | ||
204 | + exists = False | ||
205 | + resource = self.resource_exists() | ||
206 | + if resource: | ||
207 | + methods = resource.get('resourceMethods') | ||
208 | + if methods: | ||
209 | + for method_name in methods: | ||
210 | + if method_name == method: | ||
211 | + exists = True | ||
212 | + return exists | ||
213 | + | ||
214 | + def find_parent_resource_id(self): | ||
215 | + parent_id = None | ||
216 | + resources = self._get_resources() | ||
217 | +        for item in resources.values():	| ||
218 | +            if item['path'] == self.parent_resource:	| ||
219 | +                parent_id = item['id']	| ||
220 | + return parent_id | ||
221 | + | ||
222 | + def api_update(self): | ||
223 | + LOG.info('updating restapi %s', self.api_name) | ||
224 | + | ||
225 | + def resource_update(self): | ||
226 | + LOG.info('updating resource %s', self.full_path) | ||
227 | + | ||
228 | + def add_permission(self): | ||
229 | + LOG.info('Adding permission for APIGateway to call function') | ||
230 | + self._context.function.add_permission( | ||
231 | + action='lambda:InvokeFunction', | ||
232 | + principal='apigateway.amazonaws.com', | ||
233 | + source_arn=self.arn) | ||
234 | + | ||
235 | + def deploy(self): | ||
236 | + if self.api_exists(): | ||
237 | + self.api_update() | ||
238 | + else: | ||
239 | + self.create_restapi() | ||
240 | + if self.resource_exists(): | ||
241 | + self.resource_update() | ||
242 | + else: | ||
243 | + self.create_resource_path() | ||
244 | + self.create_methods() | ||
245 | + self.add_permission() | ||
246 | + | ||
247 | + def delete(self): | ||
248 | +        LOG.info('deleting resource %s', self.resource_name)	| ||
249 | +        response = None	| ||
250 | +        try:	| ||
250 | + response = self._apigateway_client.call( | ||
251 | + 'delete_resource', | ||
252 | + restApiId=self.api_id, | ||
253 | + resourceId=self.resource_id) | ||
254 | + LOG.debug(response) | ||
255 | + except ClientError: | ||
256 | + LOG.exception('Unable to delete resource %s', self.resource_name) | ||
257 | + return response | ||
258 | + | ||
259 | + def status(self): | ||
260 | + try: | ||
261 | + response = self._apigateway_client.call( | ||
262 | +                'get_rest_api',	| ||
263 | +                restApiId=self.api_id)	| ||
264 | +            LOG.debug(response)	| ||
265 | +        except ClientError:	| ||
266 | +            LOG.exception('restapi %s not found', self.api_name)	| ||
267 | + response = None | ||
268 | + return response |
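`create_resource_path()` walks the configured path one segment at a time, creating only the segments API Gateway does not already have. The traversal logic can be tested offline; in this sketch the dict stands in for what `_get_resources()` returns and `create` for the `create_resource` call (names are illustrative):

```python
def walk_resource_path(full_path, resources, create):
    """Walk '/a/b' left to right, calling create(parent, part) for any
    segment missing from the resources map (keyed by path)."""
    parent = None
    build_path = []
    for part in full_path.split('/'):
        build_path.append(part)
        path = '/'.join(build_path)
        if path == '':
            # the leading empty segment maps to the root resource
            parent = resources['/']
        else:
            if path not in resources and parent is not None:
                resources[path] = create(parent, part)
            parent = resources[path]
    return resources[full_path]

# simulate an API that starts with only the root resource
resources = {'/': {'id': 'root', 'path': '/'}}
ids = iter(['r1', 'r2'])
leaf = walk_resource_path(
    '/pets/dog', resources,
    lambda parent, part: {'id': next(ids), 'parentId': parent['id']})
```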
1 | -# Copyright (c) 2015 Mitch Garnaat http://garnaat.org/ | 1 | +# Copyright (c) 2014, 2015 Mitch Garnaat |
2 | # | 2 | # |
3 | -# Licensed under the Apache License, Version 2.0 (the "License"). You | 3 | +# Licensed under the Apache License, Version 2.0 (the "License"); |
4 | -# may not use this file except in compliance with the License. A copy of | 4 | +# you may not use this file except in compliance with the License. |
5 | -# the License is located at | 5 | +# You may obtain a copy of the License at |
6 | # | 6 | # |
7 | -# http://aws.amazon.com/apache2.0/ | 7 | +# http://www.apache.org/licenses/LICENSE-2.0 |
8 | # | 8 | # |
9 | -# or in the "license" file accompanying this file. This file is | 9 | +# Unless required by applicable law or agreed to in writing, software |
10 | -# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF | 10 | +# distributed under the License is distributed on an "AS IS" BASIS, |
11 | -# ANY KIND, either express or implied. See the License for the specific | 11 | +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. |
12 | -# language governing permissions and limitations under the License. | 12 | +# See the License for the specific language governing permissions and |
13 | +# limitations under the License. | ||
13 | 14 | ||
14 | import logging | 15 | import logging |
15 | 16 | ||
16 | from botocore.exceptions import ClientError | 17 | from botocore.exceptions import ClientError |
17 | 18 | ||
18 | -import kappa.aws | 19 | +import kappa.awsclient |
19 | 20 | ||
20 | LOG = logging.getLogger(__name__) | 21 | LOG = logging.getLogger(__name__) |
21 | 22 | ||
... | @@ -39,20 +40,20 @@ class Role(object): | ... | @@ -39,20 +40,20 @@ class Role(object): |
39 | def __init__(self, context, config): | 40 | def __init__(self, context, config): |
40 | self._context = context | 41 | self._context = context |
41 | self._config = config | 42 | self._config = config |
42 | - aws = kappa.aws.get_aws(context) | 43 | + self._iam_client = kappa.awsclient.create_client( |
43 | - self._iam_svc = aws.create_client('iam') | 44 | + 'iam', context.session) |
44 | self._arn = None | 45 | self._arn = None |
45 | 46 | ||
46 | @property | 47 | @property |
47 | def name(self): | 48 | def name(self): |
48 | - return self._config['name'] | 49 | + return '{}_{}'.format(self._context.name, self._context.environment) |
49 | 50 | ||
50 | @property | 51 | @property |
51 | def arn(self): | 52 | def arn(self): |
52 | if self._arn is None: | 53 | if self._arn is None: |
53 | try: | 54 | try: |
54 | - response = self._iam_svc.get_role( | 55 | + response = self._iam_client.call( |
55 | - RoleName=self.name) | 56 | + 'get_role', RoleName=self.name) |
56 | LOG.debug(response) | 57 | LOG.debug(response) |
57 | self._arn = response['Role']['Arn'] | 58 | self._arn = response['Role']['Arn'] |
58 | except Exception: | 59 | except Exception: |
... | @@ -60,20 +61,12 @@ class Role(object): | ... | @@ -60,20 +61,12 @@ class Role(object): |
60 | return self._arn | 61 | return self._arn |
61 | 62 | ||
62 | def _find_all_roles(self): | 63 | def _find_all_roles(self): |
63 | - # boto3 does not currently do pagination | ||
64 | - # so we have to do it ourselves | ||
65 | - roles = [] | ||
66 | try: | 64 | try: |
67 | - response = self._iam_svc.list_roles() | 65 | + response = self._iam_client.call('list_roles') |
68 | - roles += response['Roles'] | ||
69 | - while response['IsTruncated']: | ||
70 | - LOG.debug('getting another page of roles') | ||
71 | - response = self._iam_svc.list_roles( | ||
72 | - Marker=response['Marker']) | ||
73 | - roles += response['Roles'] | ||
74 | except Exception: | 66 | except Exception: |
75 | LOG.exception('Error listing roles') | 67 | LOG.exception('Error listing roles') |
76 | - return roles | 68 | + response = {} |
69 | + return response.get('Roles', list()) | ||
77 | 70 | ||
78 | def exists(self): | 71 | def exists(self): |
79 | for role in self._find_all_roles(): | 72 | for role in self._find_all_roles(): |
... | @@ -82,22 +75,26 @@ class Role(object): | ... | @@ -82,22 +75,26 @@ class Role(object): |
82 | return None | 75 | return None |
83 | 76 | ||
84 | def create(self): | 77 | def create(self): |
85 | - LOG.debug('creating role %s', self.name) | 78 | + LOG.info('creating role %s', self.name) |
86 | role = self.exists() | 79 | role = self.exists() |
87 | if not role: | 80 | if not role: |
88 | try: | 81 | try: |
89 | - response = self._iam_svc.create_role( | 82 | + response = self._iam_client.call( |
83 | + 'create_role', | ||
90 | Path=self.Path, RoleName=self.name, | 84 | Path=self.Path, RoleName=self.name, |
91 | AssumeRolePolicyDocument=AssumeRolePolicyDocument) | 85 | AssumeRolePolicyDocument=AssumeRolePolicyDocument) |
92 | LOG.debug(response) | 86 | LOG.debug(response) |
93 | if self._context.policy: | 87 | if self._context.policy: |
94 | LOG.debug('attaching policy %s', self._context.policy.arn) | 88 | LOG.debug('attaching policy %s', self._context.policy.arn) |
95 | - response = self._iam_svc.attach_role_policy( | 89 | + response = self._iam_client.call( |
90 | + 'attach_role_policy', | ||
96 | RoleName=self.name, | 91 | RoleName=self.name, |
97 | PolicyArn=self._context.policy.arn) | 92 | PolicyArn=self._context.policy.arn) |
98 | LOG.debug(response) | 93 | LOG.debug(response) |
99 | except ClientError: | 94 | except ClientError: |
100 | LOG.exception('Error creating Role') | 95 | LOG.exception('Error creating Role') |
96 | + else: | ||
97 | + LOG.info('role already exists') | ||
101 | 98 | ||
102 | def delete(self): | 99 | def delete(self): |
103 | response = None | 100 | response = None |
... | @@ -106,10 +103,12 @@ class Role(object): | ... | @@ -106,10 +103,12 @@ class Role(object): |
106 | LOG.debug('First detach the policy from the role') | 103 | LOG.debug('First detach the policy from the role') |
107 | policy_arn = self._context.policy.arn | 104 | policy_arn = self._context.policy.arn |
108 | if policy_arn: | 105 | if policy_arn: |
109 | - response = self._iam_svc.detach_role_policy( | 106 | + response = self._iam_client.call( |
107 | + 'detach_role_policy', | ||
110 | RoleName=self.name, PolicyArn=policy_arn) | 108 | RoleName=self.name, PolicyArn=policy_arn) |
111 | LOG.debug(response) | 109 | LOG.debug(response) |
112 | - response = self._iam_svc.delete_role(RoleName=self.name) | 110 | + response = self._iam_client.call( |
111 | + 'delete_role', RoleName=self.name) | ||
113 | LOG.debug(response) | 112 | LOG.debug(response) |
114 | except ClientError: | 113 | except ClientError: |
115 | LOG.exception('role %s not found', self.name) | 114 | LOG.exception('role %s not found', self.name) |
... | @@ -118,7 +117,8 @@ class Role(object): | ... | @@ -118,7 +117,8 @@ class Role(object): |
118 | def status(self): | 117 | def status(self): |
119 | LOG.debug('getting status for role %s', self.name) | 118 | LOG.debug('getting status for role %s', self.name) |
120 | try: | 119 | try: |
121 | - response = self._iam_svc.get_role(RoleName=self.name) | 120 | + response = self._iam_client.call( |
121 | + 'get_role', RoleName=self.name) | ||
122 | LOG.debug(response) | 122 | LOG.debug(response) |
123 | except ClientError: | 123 | except ClientError: |
124 | LOG.debug('role %s not found', self.name) | 124 | LOG.debug('role %s not found', self.name) | ... | ... |
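The refactor drops the hand-rolled `Marker` loop in favor of a single call through the new `kappa.awsclient` wrapper. For reference, this is the pagination pattern being removed, in isolation (the fake pages stand in for IAM `list_roles` responses):

```python
def list_all(fetch_page):
    """Accumulate items across Marker-paginated list_roles responses,
    the way the removed loop did by hand."""
    items = []
    response = fetch_page(None)
    items.extend(response['Roles'])
    while response.get('IsTruncated'):
        response = fetch_page(response['Marker'])
        items.extend(response['Roles'])
    return items

# two fake pages standing in for truncated IAM responses
pages = {None: {'Roles': ['role-a', 'role-b'], 'IsTruncated': True, 'Marker': 'm1'},
         'm1': {'Roles': ['role-c'], 'IsTruncated': False}}
all_roles = list_all(lambda marker: pages[marker])
```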
kappa/scripts/__init__.py
0 → 100644
1 | +# Copyright (c) 2014, 2015 Mitch Garnaat | ||
2 | +# | ||
3 | +# Licensed under the Apache License, Version 2.0 (the "License"); | ||
4 | +# you may not use this file except in compliance with the License. | ||
5 | +# You may obtain a copy of the License at | ||
6 | +# | ||
7 | +# http://www.apache.org/licenses/LICENSE-2.0 | ||
8 | +# | ||
9 | +# Unless required by applicable law or agreed to in writing, software | ||
10 | +# distributed under the License is distributed on an "AS IS" BASIS, | ||
11 | +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | ||
12 | +# See the License for the specific language governing permissions and | ||
13 | +# limitations under the License. |
kappa/scripts/cli.py
0 → 100755
1 | +#!/usr/bin/env python | ||
2 | +# Copyright (c) 2014, 2015 Mitch Garnaat http://garnaat.org/ | ||
3 | +# | ||
4 | +# Licensed under the Apache License, Version 2.0 (the "License"). You | ||
5 | +# may not use this file except in compliance with the License. A copy of | ||
6 | +# the License is located at | ||
7 | +# | ||
8 | +# http://aws.amazon.com/apache2.0/ | ||
9 | +# | ||
10 | +# or in the "license" file accompanying this file. This file is | ||
11 | +# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF | ||
12 | +# ANY KIND, either express or implied. See the License for the specific | ||
13 | +# language governing permissions and limitations under the License. | ||
14 | + | ||
15 | +from datetime import datetime | ||
16 | +import base64 | ||
17 | + | ||
18 | +import click | ||
19 | + | ||
20 | +from kappa.context import Context | ||
21 | + | ||
22 | +pass_ctx = click.make_pass_decorator(Context) | ||
23 | + | ||
24 | + | ||
25 | +@click.group() | ||
26 | +@click.option( | ||
27 | + '--config', | ||
28 | + default='kappa.yml', | ||
29 | + type=click.File('rb'), | ||
30 | + envvar='KAPPA_CONFIG', | ||
31 | + help='Name of config file (default is kappa.yml)' | ||
32 | +) | ||
33 | +@click.option( | ||
34 | + '--debug/--no-debug', | ||
35 | + default=False, | ||
36 | + help='Turn on debugging output' | ||
37 | +) | ||
38 | +@click.option( | ||
39 | + '--env', | ||
40 | + default='dev', | ||
41 | + help='Specify which environment to work with (default dev)' | ||
42 | +) | ||
43 | +@click.option( | ||
44 | + '--record-path', | ||
45 | + type=click.Path(exists=True, file_okay=False, writable=True), | ||
46 | + help='Uses placebo to record AWS responses to this path' | ||
47 | +) | ||
48 | +@click.pass_context | ||
49 | +def cli(ctx, config=None, debug=False, env=None, record_path=None): | ||
50 | + ctx.obj = Context(config, env, debug, record_path) | ||
51 | + | ||
52 | + | ||
53 | +@cli.command() | ||
54 | +@pass_ctx | ||
55 | +def deploy(ctx): | ||
56 | + """Deploy the Lambda function and any policies and roles required""" | ||
57 | + click.echo('deploying') | ||
58 | + ctx.deploy() | ||
59 | + click.echo('done') | ||
60 | + | ||
61 | + | ||
62 | +@cli.command() | ||
63 | +@click.argument('data_file', type=click.File('r')) | ||
64 | +@pass_ctx | ||
65 | +def invoke(ctx, data_file): | ||
66 | + """Invoke the command synchronously""" | ||
67 | + click.echo('invoking') | ||
68 | + response = ctx.invoke(data_file.read()) | ||
69 | + log_data = base64.b64decode(response['LogResult']) | ||
70 | + click.echo(log_data) | ||
71 | + click.echo('Response:') | ||
72 | + click.echo(response['Payload'].read()) | ||
73 | + click.echo('done') | ||
74 | + | ||
75 | + | ||
76 | +@cli.command() | ||
77 | +@pass_ctx | ||
78 | +def test(ctx): | ||
79 | + """Test the command synchronously""" | ||
80 | + click.echo('testing') | ||
81 | + ctx.test() | ||
82 | + click.echo('done') | ||
83 | + | ||
84 | + | ||
85 | +@cli.command() | ||
86 | +@pass_ctx | ||
87 | +def tail(ctx): | ||
88 | + """Show the last 10 events from the function's CloudWatch log stream""" | ||
89 | + click.echo('tailing logs') | ||
90 | + for e in ctx.tail()[-10:]: | ||
91 | + ts = datetime.utcfromtimestamp(e['timestamp']//1000).isoformat() | ||
92 | + click.echo("{}: {}".format(ts, e['message'])) | ||
93 | + click.echo('done') | ||
94 | + | ||
95 | + | ||
96 | +@cli.command() | ||
97 | +@pass_ctx | ||
98 | +def status(ctx): | ||
99 | + """Print a status of this Lambda function""" | ||
100 | + status = ctx.status() | ||
101 | + click.echo(click.style('Policy', bold=True)) | ||
102 | + if status['policy']: | ||
103 | + line = ' {} ({})'.format( | ||
104 | + status['policy']['PolicyName'], | ||
105 | + status['policy']['Arn']) | ||
106 | + click.echo(click.style(line, fg='green')) | ||
107 | + click.echo(click.style('Role', bold=True)) | ||
108 | + if status['role']: | ||
109 | + line = ' {} ({})'.format( | ||
110 | + status['role']['Role']['RoleName'], | ||
111 | + status['role']['Role']['Arn']) | ||
112 | + click.echo(click.style(line, fg='green')) | ||
113 | + click.echo(click.style('Function', bold=True)) | ||
114 | + if status['function']: | ||
115 | + line = ' {} ({})'.format( | ||
116 | + status['function']['Configuration']['FunctionName'], | ||
117 | + status['function']['Configuration']['FunctionArn']) | ||
118 | + click.echo(click.style(line, fg='green')) | ||
119 | + else: | ||
120 | + click.echo(click.style(' None', fg='green')) | ||
121 | + click.echo(click.style('Event Sources', bold=True)) | ||
122 | + if status['event_sources']: | ||
123 | + for event_source in status['event_sources']: | ||
124 | + if event_source: | ||
125 | + line = ' {}: {}'.format( | ||
126 | + event_source['EventSourceArn'], event_source['State']) | ||
127 | + click.echo(click.style(line, fg='green')) | ||
128 | + else: | ||
129 | + click.echo(click.style(' None', fg='green')) | ||
130 | + | ||
131 | + | ||
132 | +@cli.command() | ||
133 | +@pass_ctx | ||
134 | +def delete(ctx): | ||
135 | + """Delete the Lambda function and related policies and roles""" | ||
136 | + click.echo('deleting') | ||
137 | + ctx.delete() | ||
138 | + click.echo('done') | ||
139 | + | ||
140 | + | ||
141 | +@cli.command() | ||
142 | +@click.argument('command', | ||
143 | + type=click.Choice(['list', 'enable', 'disable'])) | ||
144 | +@pass_ctx | ||
145 | +def event_sources(ctx, command): | ||
146 | + """List, enable, and disable event sources specified in the config file""" | ||
147 | + if command == 'list': | ||
148 | + click.echo('listing event sources') | ||
149 | + event_sources = ctx.list_event_sources() | ||
150 | + for es in event_sources: | ||
151 | + click.echo('arn: {}'.format(es['arn'])) | ||
152 | + click.echo('starting position: {}'.format(es['starting_position'])) | ||
153 | + click.echo('batch size: {}'.format(es['batch_size'])) | ||
154 | + click.echo('enabled: {}'.format(es['enabled'])) | ||
155 | + click.echo('done') | ||
156 | + elif command == 'enable': | ||
157 | + click.echo('enabling event sources') | ||
158 | + ctx.enable_event_sources() | ||
159 | + click.echo('done') | ||
160 | + elif command == 'disable': | ||
161 | + click.echo('disabling event sources') | ||
162 | + ctx.disable_event_sources() | ||
163 | + click.echo('done') |
samples/python/.gitignore
0 → 100644
samples/python/README.md
0 → 100644
1 | +A Simple Python Example | ||
2 | +======================= | ||
3 | + | ||
4 | +In this Python example, we will build a Lambda function that can be hooked up | ||
5 | +to methods in API Gateway to provide a simple CRUD REST API that persists JSON | ||
6 | +objects in DynamoDB. | ||
7 | + | ||
8 | +To implement this, we will create a single Lambda function that will be | ||
9 | +associated with the GET, POST, PUT, and DELETE HTTP methods of a single API | ||
10 | +Gateway resource. We will show the API Gateway connections later. For now, we | ||
11 | +will focus on our Lambda function. | ||
12 | + | ||
13 | + | ||
14 | + | ||
15 | +Installing Dependencies | ||
16 | +----------------------- | ||
17 | + | ||
18 | +Put all dependencies in the `requirements.txt` file in this directory and then | ||
19 | +run the following command to install them in this directory prior to uploading | ||
20 | +the code. | ||
21 | + | ||
22 | + $ pip install -r requirements.txt -t /full/path/to/this/code | ||
23 | + | ||
24 | +This will install all of the dependencies inside the code directory so they can | ||
25 | +be bundled with your own code and deployed to Lambda. | ||
26 | + | ||
27 | +The ``setup.cfg`` file in this directory is required if you are running on | ||
28 | +macOS and are using a Homebrew-installed Python. It may not be needed on other platforms. | ||
29 | + |
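The Homebrew note above refers to a known quirk: `pip install -t` can fail with a Homebrew-installed Python because distutils sets an install `prefix`. The actual `setup.cfg` contents are not shown in this diff, but the standard workaround (an assumption, for illustration) is to blank the prefix:

```ini
# setup.cfg -- hypothetical sketch of the standard workaround:
# an empty prefix lets `pip install -t <dir>` work with Homebrew Python
[install]
prefix=
```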
samples/python/_src/README.md
0 → 100644
1 | +The Code Is Here! | ||
2 | +================= | ||
3 | + | ||
4 | +At the moment, the contents of this directory are created by hand but when | ||
5 | +LambdaPI is complete, the basic framework would be created for you. You would | ||
6 | +have a Python source file that works but doesn't actually do anything. And the | ||
7 | +config.json file here would be created on the fly at deployment time. The | ||
8 | +correct resource names and other variables would be written into the config | ||
9 | +file and the config file would then get bundled up with the code. You can | ||
10 | +then load the config file at run time in the Lambda Python code so you don't | ||
11 | +have to hardcode resource names in your code. | ||
12 | + | ||
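Based on how `simple.py` reads this file (`config['region_name']` and `config['sample_table']`), a generated `config.json` might look like the sketch below. The values are illustrative assumptions; the actual `dev_config.json` and `prod_config.json` contents are not shown in this diff.

```json
{
    "region_name": "us-west-2",
    "sample_table": "kappa-python-sample"
}
```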
13 | + | ||
14 | +Installing Dependencies | ||
15 | +----------------------- | ||
16 | + | ||
17 | +Put all dependencies in the `requirements.txt` file in this directory and then | ||
18 | +run the following command to install them in this directory prior to uploading | ||
19 | +the code. | ||
20 | + | ||
21 | + $ pip install -r requirements.txt -t /full/path/to/this/code | ||
22 | + | ||
23 | +This will install all of the dependencies inside the code directory so they can | ||
24 | +be bundled with your own code and deployed to Lambda. | ||
25 | + | ||
26 | +The ``setup.cfg`` file in this directory is required if you are running on | ||
27 | +macOS and are using a Homebrew-installed Python. It may not be needed on other platforms. | ||
28 | + |
samples/python/_src/dev_config.json
0 → 100644
samples/python/_src/prod_config.json
0 → 100644
samples/python/_src/requirements.txt
0 → 100644
samples/python/_src/simple.py
0 → 100644
1 | +import logging | ||
2 | +import json | ||
3 | +import uuid | ||
4 | + | ||
5 | +import boto3 | ||
6 | + | ||
7 | +LOG = logging.getLogger() | ||
8 | +LOG.setLevel(logging.INFO) | ||
9 | + | ||
10 | +# The kappa deploy command will make sure that the right config file | ||
11 | +# for this environment is available in the local directory. | ||
12 | +config = json.load(open('config.json')) | ||
13 | + | ||
14 | +session = boto3.Session(region_name=config['region_name']) | ||
15 | +ddb = session.resource('dynamodb') | ||
16 | +table = ddb.Table(config['sample_table']) | ||
17 | + | ||
18 | + | ||
19 | +def foobar(): | ||
20 | + return 42 | ||
21 | + | ||
22 | + | ||
23 | +def _get(event, context): | ||
24 | + customer_id = event.get('id') | ||
25 | + if customer_id is None: | ||
26 | + raise Exception('No id provided for GET operation') | ||
27 | + response = table.get_item(Key={'id': customer_id}) | ||
28 | + item = response.get('Item') | ||
29 | + if item is None: | ||
30 | + raise Exception('id: {} not found'.format(customer_id)) | ||
31 | + return response['Item'] | ||
32 | + | ||
33 | + | ||
34 | +def _post(event, context): | ||
35 | + item = event['json_body'] | ||
36 | + if item is None: | ||
37 | + raise Exception('No json_body found in event') | ||
38 | + item['id'] = str(uuid.uuid4()) | ||
39 | + table.put_item(Item=item) | ||
40 | + return item | ||
41 | + | ||
42 | + | ||
43 | +def _put(event, context): | ||
44 | + data = _get(event, context) | ||
45 | + id_ = data.get('id') | ||
46 | + data.update(event['json_body']) | ||
47 | + # don't allow the id to be changed | ||
48 | + data['id'] = id_ | ||
49 | + table.put_item(Item=data) | ||
50 | + return data | ||
51 | + | ||
52 | + | ||
53 | +def handler(event, context): | ||
54 | + LOG.info(event) | ||
55 | + http_method = event.get('http_method') | ||
56 | + if not http_method: | ||
57 | + return 'NoHttpMethodSupplied' | ||
58 | + if http_method == 'GET': | ||
59 | + return _get(event, context) | ||
60 | + elif http_method == 'POST': | ||
61 | + return _post(event, context) | ||
62 | + elif http_method == 'PUT': | ||
63 | + return _put(event, context) | ||
64 | + elif http_method == 'DELETE': | ||
65 | + return _put(event, context) | ||
66 | + else: | ||
67 | + raise Exception('UnsupportedMethod: {}'.format(http_method)) |
samples/python/_tests/test_get.json
0 → 100644
samples/python/_tests/test_post.json
0 → 100644
samples/python/_tests/unit/__init__.py
0 → 100644
File mode changed
samples/python/_tests/unit/test_simple.py
0 → 100644
samples/python/kappa.yml.sample
0 → 100644
1 | +--- | ||
2 | +name: kappa-python-sample | ||
3 | +environments: | ||
4 | + dev: | ||
5 | + profile: <your dev profile> | ||
6 | + region: <your dev region e.g. us-west-2> | ||
7 | + policy: | ||
8 | + resources: | ||
9 | + - arn: arn:aws:dynamodb:us-west-2:123456789012:table/kappa-python-sample | ||
10 | + | ||
11 | + actions: | ||
12 | + - "*" | ||
13 | + - arn: arn:aws:logs:*:*:* | ||
14 | + actions: | ||
15 | + - "*" | ||
16 | + prod: | ||
17 | + profile: <your prod profile> | ||
18 | + region: <your prod region e.g. us-west-2> | ||
19 | + policy: | ||
20 | + resources: | ||
21 | + - arn: arn:aws:dynamodb:us-west-2:234567890123:table/kappa-python-sample | ||
22 | + actions: | ||
23 | + - "*" | ||
24 | + - arn: arn:aws:logs:*:*:* | ||
25 | + actions: | ||
26 | + - "*" | ||
26 | +lambda: | ||
27 | + description: A simple Python sample | ||
28 | + handler: simple.handler | ||
29 | + runtime: python2.7 | ||
30 | + memory_size: 256 | ||
31 | + timeout: 3 | ||
32 | + | ||
... | \ No newline at end of file | ... | \ No newline at end of file |
samples/simple/.gitignore
0 → 100644
samples/simple/_src/README.md
0 → 100644
1 | +The Code Is Here! | ||
2 | +================= | ||
3 | + | ||
4 | +Installing Dependencies | ||
5 | +----------------------- | ||
6 | + | ||
7 | +Put all dependencies in the `requirements.txt` file in this directory and then | ||
8 | +run the following command to install them in this directory prior to uploading | ||
9 | +the code. | ||
10 | + | ||
11 | + $ pip install -r requirements.txt -t /full/path/to/this/code | ||
12 | + | ||
13 | +This will install all of the dependencies inside the code directory so they can | ||
14 | +be bundled with your own code and deployed to Lambda. | ||
15 | + | ||
16 | +The ``setup.cfg`` file in this directory is required if you are running on | ||
17 | +macOS and are using a Homebrew-installed Python. It may not be needed on other platforms. | ||
18 | + |
samples/simple/_src/requirements.txt
0 → 100644
File mode changed
samples/simple/_src/setup.cfg
0 → 100644
samples/simple/_src/simple.py
0 → 100644
samples/simple/_tests/test_one.json
0 → 100644
samples/simple/kappa.yml.sample
0 → 100644
1 | +--- | ||
2 | +name: kappa-simple | ||
3 | +environments: | ||
4 | + dev: | ||
5 | + profile: <your profile here> | ||
6 | + region: <your region here> | ||
7 | + policy: | ||
8 | + resources: | ||
9 | + - arn: arn:aws:logs:*:*:* | ||
10 | + actions: | ||
11 | + - "*" | ||
12 | + prod: | ||
13 | + profile: <your profile here> | ||
14 | + region: <your region here> | ||
15 | + policy: | ||
16 | + resources: | ||
17 | + - arn: arn:aws:logs:*:*:* | ||
18 | + actions: | ||
19 | + - "*" | ||
20 | +lambda: | ||
21 | + description: A very simple Kappa example | ||
22 | + handler: simple.handler | ||
23 | + runtime: python2.7 | ||
24 | + memory_size: 128 | ||
25 | + timeout: 3 | ||
26 | + | ||
... | \ No newline at end of file | ... | \ No newline at end of file |
... | @@ -5,8 +5,9 @@ from setuptools import setup, find_packages | ... | @@ -5,8 +5,9 @@ from setuptools import setup, find_packages |
5 | import os | 5 | import os |
6 | 6 | ||
7 | requires = [ | 7 | requires = [ |
8 | - 'boto3==1.1.1', | 8 | + 'boto3>=1.2.2', |
9 | - 'click==4.0', | 9 | + 'placebo>=0.4.1', |
10 | + 'click>=5.0', | ||
10 | 'PyYAML>=3.11' | 11 | 'PyYAML>=3.11' |
11 | ] | 12 | ] |
12 | 13 | ||
... | @@ -22,7 +23,10 @@ setup( | ... | @@ -22,7 +23,10 @@ setup( |
22 | packages=find_packages(exclude=['tests*']), | 23 | packages=find_packages(exclude=['tests*']), |
23 | package_data={'kappa': ['_version']}, | 24 | package_data={'kappa': ['_version']}, |
24 | package_dir={'kappa': 'kappa'}, | 25 | package_dir={'kappa': 'kappa'}, |
25 | - scripts=['bin/kappa'], | 26 | + entry_points=""" |
27 | + [console_scripts] | ||
28 | + kappa=kappa.scripts.cli:cli | ||
29 | + """, | ||
26 | install_requires=requires, | 30 | install_requires=requires, |
27 | license=open("LICENSE").read(), | 31 | license=open("LICENSE").read(), |
28 | classifiers=( | 32 | classifiers=( |
... | @@ -32,10 +36,10 @@ setup( | ... | @@ -32,10 +36,10 @@ setup( |
32 | 'Natural Language :: English', | 36 | 'Natural Language :: English', |
33 | 'License :: OSI Approved :: Apache Software License', | 37 | 'License :: OSI Approved :: Apache Software License', |
34 | 'Programming Language :: Python', | 38 | 'Programming Language :: Python', |
35 | - 'Programming Language :: Python :: 2.6', | ||
36 | 'Programming Language :: Python :: 2.7', | 39 | 'Programming Language :: Python :: 2.7', |
37 | 'Programming Language :: Python :: 3', | 40 | 'Programming Language :: Python :: 3', |
38 | 'Programming Language :: Python :: 3.3', | 41 | 'Programming Language :: Python :: 3.3', |
39 | - 'Programming Language :: Python :: 3.4' | 42 | + 'Programming Language :: Python :: 3.4', |
43 | + 'Programming Language :: Python :: 3.5' | ||
40 | ), | 44 | ), |
41 | ) | 45 | ) | ... | ... |
tests/unit/cfg/aws_credentials
0 → 100644
tests/unit/data/BazPolicy.json
deleted
100644 → 0
1 | -{ | ||
2 | - "Statement":[ | ||
3 | - {"Condition": | ||
4 | - {"ArnLike":{"AWS:SourceArn":"arn:aws:sns:us-east-1:123456789012:lambda_topic"}}, | ||
5 | - "Resource":"arn:aws:lambda:us-east-1:123456789023:function:messageStore", | ||
6 | - "Action":"lambda:invokeFunction", | ||
7 | - "Principal":{"Service":"sns.amazonaws.com"}, | ||
8 | - "Sid":"sns invoke","Effect":"Allow" | ||
9 | - }], | ||
10 | - "Id":"default", | ||
11 | - "Version":"2012-10-17" | ||
12 | -} |
tests/unit/foobar/.kappa/cache
0 → 100644
tests/unit/foobar/_src/simple.py
0 → 100644
tests/unit/foobar/kappa-simple.zip
0 → 100644
tests/unit/foobar/kappa.yml
0 → 100644
1 | +--- | ||
2 | +name: kappa-simple | ||
3 | +environments: | ||
4 | + dev: | ||
5 | + profile: foobar | ||
6 | + region: us-west-2 | ||
7 | + policy: | ||
8 | + resources: | ||
9 | + - arn: arn:aws:logs:*:*:* | ||
10 | + actions: | ||
11 | + - "*" | ||
12 | +lambda: | ||
13 | + description: Foo the Bar | ||
14 | + handler: simple.handler | ||
15 | + runtime: python2.7 | ||
16 | + memory_size: 256 | ||
17 | + timeout: 3 |
tests/unit/mock_aws.py
deleted
100644 → 0
1 | -import inspect | ||
2 | - | ||
3 | -import mock | ||
4 | - | ||
5 | -import tests.unit.responses as responses | ||
6 | - | ||
7 | - | ||
8 | -class MockAWS(object): | ||
9 | - | ||
10 | - def __init__(self, profile=None, region=None): | ||
11 | - self.response_map = {} | ||
12 | - for name, value in inspect.getmembers(responses): | ||
13 | - if name.startswith('__'): | ||
14 | - continue | ||
15 | - if '_' in name: | ||
16 | - service_name, request_name = name.split('_', 1) | ||
17 | - if service_name not in self.response_map: | ||
18 | - self.response_map[service_name] = {} | ||
19 | - self.response_map[service_name][request_name] = value | ||
20 | - | ||
21 | - def create_client(self, client_name): | ||
22 | - client = None | ||
23 | - if client_name in self.response_map: | ||
24 | - client = mock.Mock() | ||
25 | - for request in self.response_map[client_name]: | ||
26 | - response = self.response_map[client_name][request] | ||
27 | - setattr(client, request, mock.Mock(side_effect=response)) | ||
28 | - return client | ||
29 | - | ||
30 | - | ||
31 | -def get_aws(context): | ||
32 | - return MockAWS() |
tests/unit/responses.py
deleted
100644 → 0
1 | +{ | ||
2 | + "status_code": 200, | ||
3 | + "data": { | ||
4 | + "Policy": { | ||
5 | + "PolicyName": "kappa-simple_dev", | ||
6 | + "CreateDate": { | ||
7 | + "hour": 4, | ||
8 | + "__class__": "datetime", | ||
9 | + "month": 12, | ||
10 | + "second": 46, | ||
11 | + "microsecond": 302000, | ||
12 | + "year": 2015, | ||
13 | + "day": 14, | ||
14 | + "minute": 13 | ||
15 | + }, | ||
16 | + "AttachmentCount": 0, | ||
17 | + "IsAttachable": true, | ||
18 | + "PolicyId": "ANPAJ6USPUIU5QKQ7DWMG", | ||
19 | + "DefaultVersionId": "v1", | ||
20 | + "Path": "/kappa/", | ||
21 | + "Arn": "arn:aws:iam::123456789012:policy/kappa/kappa-simple_dev", | ||
22 | + "UpdateDate": { | ||
23 | + "hour": 4, | ||
24 | + "__class__": "datetime", | ||
25 | + "month": 12, | ||
26 | + "second": 46, | ||
27 | + "microsecond": 302000, | ||
28 | + "year": 2015, | ||
29 | + "day": 14, | ||
30 | + "minute": 13 | ||
31 | + } | ||
32 | + }, | ||
33 | + "ResponseMetadata": { | ||
34 | + "HTTPStatusCode": 200, | ||
35 | + "RequestId": "11cdf3d8-a219-11e5-a392-d5ea3c3fc695" | ||
36 | + } | ||
37 | + } | ||
38 | +} |
1 | +{ | ||
2 | + "status_code": 200, | ||
3 | + "data": { | ||
4 | + "Role": { | ||
5 | + "AssumeRolePolicyDocument": "%7B%0A%20%20%20%20%22Version%22%20%3A%20%222012-10-17%22%2C%0A%20%20%20%20%22Statement%22%3A%20%5B%20%7B%0A%20%20%20%20%20%20%20%20%22Effect%22%3A%20%22Allow%22%2C%0A%20%20%20%20%20%20%20%20%22Principal%22%3A%20%7B%0A%20%20%20%20%20%20%20%20%20%20%20%20%22Service%22%3A%20%5B%20%22lambda.amazonaws.com%22%20%5D%0A%20%20%20%20%20%20%20%20%7D%2C%0A%20%20%20%20%20%20%20%20%22Action%22%3A%20%5B%20%22sts%3AAssumeRole%22%20%5D%0A%20%20%20%20%7D%20%5D%0A%7D", | ||
6 | + "RoleId": "AROAICWPJDQLUTEOHRQZO", | ||
7 | + "CreateDate": { | ||
8 | + "hour": 4, | ||
9 | + "__class__": "datetime", | ||
10 | + "month": 12, | ||
11 | + "second": 46, | ||
12 | + "microsecond": 988000, | ||
13 | + "year": 2015, | ||
14 | + "day": 14, | ||
15 | + "minute": 13 | ||
16 | + }, | ||
17 | + "RoleName": "kappa-simple_dev", | ||
18 | + "Path": "/kappa/", | ||
19 | + "Arn": "arn:aws:iam::123456789012:role/kappa/kappa-simple_dev" | ||
20 | + }, | ||
21 | + "ResponseMetadata": { | ||
22 | + "HTTPStatusCode": 200, | ||
23 | + "RequestId": "123d5777-a219-11e5-8386-d3391e1d709e" | ||
24 | + } | ||
25 | + } | ||
26 | +} |
1 | +{ | ||
2 | + "status_code": 200, | ||
3 | + "data": { | ||
4 | + "Role": { | ||
5 | + "AssumeRolePolicyDocument": "%7B%22Version%22%3A%222012-10-17%22%2C%22Statement%22%3A%5B%7B%22Effect%22%3A%22Allow%22%2C%22Principal%22%3A%7B%22Service%22%3A%22lambda.amazonaws.com%22%7D%2C%22Action%22%3A%22sts%3AAssumeRole%22%7D%5D%7D", | ||
6 | + "RoleId": "AROAICWPJDQLUTEOHRQZO", | ||
7 | + "CreateDate": { | ||
8 | + "hour": 4, | ||
9 | + "__class__": "datetime", | ||
10 | + "month": 12, | ||
11 | + "second": 46, | ||
12 | + "microsecond": 0, | ||
13 | + "year": 2015, | ||
14 | + "day": 14, | ||
15 | + "minute": 13 | ||
16 | + }, | ||
17 | + "RoleName": "kappa-simple_dev", | ||
18 | + "Path": "/kappa/", | ||
19 | + "Arn": "arn:aws:iam::123456789012:role/kappa/kappa-simple_dev" | ||
20 | + }, | ||
21 | + "ResponseMetadata": { | ||
22 | + "HTTPStatusCode": 200, | ||
23 | + "RequestId": "12dca49a-a219-11e5-9912-d70327f9be2c" | ||
24 | + } | ||
25 | + } | ||
26 | +} |
1 | +{ | ||
2 | + "status_code": 200, | ||
3 | + "data": { | ||
4 | + "Role": { | ||
5 | + "AssumeRolePolicyDocument": "%7B%22Version%22%3A%222012-10-17%22%2C%22Statement%22%3A%5B%7B%22Effect%22%3A%22Allow%22%2C%22Principal%22%3A%7B%22Service%22%3A%22lambda.amazonaws.com%22%7D%2C%22Action%22%3A%22sts%3AAssumeRole%22%7D%5D%7D", | ||
6 | + "RoleId": "AROAICWPJDQLUTEOHRQZO", | ||
7 | + "CreateDate": { | ||
8 | + "hour": 4, | ||
9 | + "__class__": "datetime", | ||
10 | + "month": 12, | ||
11 | + "second": 46, | ||
12 | + "microsecond": 0, | ||
13 | + "year": 2015, | ||
14 | + "day": 14, | ||
15 | + "minute": 13 | ||
16 | + }, | ||
17 | + "RoleName": "kappa-simple_dev", | ||
18 | + "Path": "/kappa/", | ||
19 | + "Arn": "arn:aws:iam::123456789012:role/kappa/kappa-simple_dev" | ||
20 | + }, | ||
21 | + "ResponseMetadata": { | ||
22 | + "HTTPStatusCode": 200, | ||
23 | + "RequestId": "1bd39022-a219-11e5-bb1e-6b18bfdcba09" | ||
24 | + } | ||
25 | + } | ||
26 | +} |
1 | +{ | ||
2 | + "status_code": 200, | ||
3 | + "data": { | ||
4 | + "ResponseMetadata": { | ||
5 | + "HTTPStatusCode": 200, | ||
6 | + "RequestId": "1264405a-a219-11e5-ad54-c769aa17a0a1" | ||
7 | + }, | ||
8 | + "IsTruncated": false, | ||
9 | + "Policies": [ | ||
10 | + { | ||
11 | + "PolicyName": "kappa-simple_dev", | ||
12 | + "CreateDate": { | ||
13 | + "hour": 4, | ||
14 | + "__class__": "datetime", | ||
15 | + "month": 12, | ||
16 | + "second": 46, | ||
17 | + "microsecond": 0, | ||
18 | + "year": 2015, | ||
19 | + "day": 14, | ||
20 | + "minute": 13 | ||
21 | + }, | ||
22 | + "AttachmentCount": 0, | ||
23 | + "IsAttachable": true, | ||
24 | + "PolicyId": "ANPAJ6USPUIU5QKQ7DWMG", | ||
25 | + "DefaultVersionId": "v1", | ||
26 | + "Path": "/kappa/", | ||
27 | + "Arn": "arn:aws:iam::123456789012:policy/kappa/kappa-simple_dev", | ||
28 | + "UpdateDate": { | ||
29 | + "hour": 4, | ||
30 | + "__class__": "datetime", | ||
31 | + "month": 12, | ||
32 | + "second": 46, | ||
33 | + "microsecond": 0, | ||
34 | + "year": 2015, | ||
35 | + "day": 14, | ||
36 | + "minute": 13 | ||
37 | + } | ||
38 | + }, | ||
39 | + { | ||
40 | + "PolicyName": "FooBar15", | ||
41 | + "CreateDate": { | ||
42 | + "hour": 19, | ||
43 | + "__class__": "datetime", | ||
44 | + "month": 12, | ||
45 | + "second": 15, | ||
46 | + "microsecond": 0, | ||
47 | + "year": 2015, | ||
48 | + "day": 10, | ||
49 | + "minute": 22 | ||
50 | + }, | ||
51 | + "AttachmentCount": 1, | ||
52 | + "IsAttachable": true, | ||
53 | + "PolicyId": "ANPAJ3MM445EFVC6OWPIO", | ||
54 | + "DefaultVersionId": "v1", | ||
55 | + "Path": "/kappa/", | ||
56 | + "Arn": "arn:aws:iam::123456789012:policy/kappa/FooBar15", | ||
57 | + "UpdateDate": { | ||
58 | + "hour": 19, | ||
59 | + "__class__": "datetime", | ||
60 | + "month": 12, | ||
61 | + "second": 15, | ||
62 | + "microsecond": 0, | ||
63 | + "year": 2015, | ||
64 | + "day": 10, | ||
65 | + "minute": 22 | ||
66 | + } | ||
67 | + } | ||
68 | + ] | ||
69 | + } | ||
70 | +} |
1 | +{ | ||
2 | + "status_code": 200, | ||
3 | + "data": { | ||
4 | + "ResponseMetadata": { | ||
5 | + "HTTPStatusCode": 200, | ||
6 | + "RequestId": "1b40516e-a219-11e5-bb1e-6b18bfdcba09" | ||
7 | + }, | ||
8 | + "IsTruncated": false, | ||
9 | + "Policies": [ | ||
10 | + { | ||
11 | + "PolicyName": "kappa-simple_dev", | ||
12 | + "CreateDate": { | ||
13 | + "hour": 4, | ||
14 | + "__class__": "datetime", | ||
15 | + "month": 12, | ||
16 | + "second": 46, | ||
17 | + "microsecond": 0, | ||
18 | + "year": 2015, | ||
19 | + "day": 14, | ||
20 | + "minute": 13 | ||
21 | + }, | ||
22 | + "AttachmentCount": 1, | ||
23 | + "IsAttachable": true, | ||
24 | + "PolicyId": "ANPAJ6USPUIU5QKQ7DWMG", | ||
25 | + "DefaultVersionId": "v1", | ||
26 | + "Path": "/kappa/", | ||
27 | + "Arn": "arn:aws:iam::123456789012:policy/kappa/kappa-simple_dev", | ||
28 | + "UpdateDate": { | ||
29 | + "hour": 4, | ||
30 | + "__class__": "datetime", | ||
31 | + "month": 12, | ||
32 | + "second": 46, | ||
33 | + "microsecond": 0, | ||
34 | + "year": 2015, | ||
35 | + "day": 14, | ||
36 | + "minute": 13 | ||
37 | + } | ||
38 | + }, | ||
39 | + { | ||
40 | + "PolicyName": "FooBar15", | ||
41 | + "CreateDate": { | ||
42 | + "hour": 19, | ||
43 | + "__class__": "datetime", | ||
44 | + "month": 12, | ||
45 | + "second": 15, | ||
46 | + "microsecond": 0, | ||
47 | + "year": 2015, | ||
48 | + "day": 10, | ||
49 | + "minute": 22 | ||
50 | + }, | ||
51 | + "AttachmentCount": 1, | ||
52 | + "IsAttachable": true, | ||
53 | + "PolicyId": "ANPAJ3MM445EFVC6OWPIO", | ||
54 | + "DefaultVersionId": "v1", | ||
55 | + "Path": "/kappa/", | ||
56 | + "Arn": "arn:aws:iam::123456789012:policy/kappa/FooBar15", | ||
57 | + "UpdateDate": { | ||
58 | + "hour": 19, | ||
59 | + "__class__": "datetime", | ||
60 | + "month": 12, | ||
61 | + "second": 15, | ||
62 | + "microsecond": 0, | ||
63 | + "year": 2015, | ||
64 | + "day": 10, | ||
65 | + "minute": 22 | ||
66 | + } | ||
67 | + } | ||
68 | + ] | ||
69 | + } | ||
70 | +} |
1 | +{ | ||
2 | + "status_code": 200, | ||
3 | + "data": { | ||
4 | + "ResponseMetadata": { | ||
5 | + "HTTPStatusCode": 200, | ||
6 | + "RequestId": "120be6dd-a219-11e5-ad54-c769aa17a0a1" | ||
7 | + }, | ||
8 | + "IsTruncated": false, | ||
9 | + "Roles": [ | ||
10 | + { | ||
11 | + "AssumeRolePolicyDocument": "%7B%22Version%22%3A%222012-10-17%22%2C%22Statement%22%3A%5B%7B%22Sid%22%3A%22%22%2C%22Effect%22%3A%22Allow%22%2C%22Principal%22%3A%7B%22Service%22%3A%22lambda.amazonaws.com%22%7D%2C%22Action%22%3A%22sts%3AAssumeRole%22%7D%5D%7D", | ||
12 | + "RoleId": "AROAJC6I44KNC2N4C6DUO", | ||
13 | + "CreateDate": { | ||
14 | + "hour": 13, | ||
15 | + "__class__": "datetime", | ||
16 | + "month": 8, | ||
17 | + "second": 29, | ||
18 | + "microsecond": 0, | ||
19 | + "year": 2015, | ||
20 | + "day": 12, | ||
21 | + "minute": 10 | ||
22 | + }, | ||
23 | + "RoleName": "FooBar1", | ||
24 | + "Path": "/kappa/", | ||
25 | + "Arn": "arn:aws:iam::123456789012:role/kappa/FooBar1" | ||
26 | + }, | ||
27 | + { | ||
28 | + "AssumeRolePolicyDocument": "%7B%22Version%22%3A%222012-10-17%22%2C%22Statement%22%3A%5B%7B%22Sid%22%3A%22%22%2C%22Effect%22%3A%22Allow%22%2C%22Principal%22%3A%7B%22AWS%22%3A%22arn%3Aaws%3Aiam%3A%3A433502988969%3Aroot%22%7D%2C%22Action%22%3A%22sts%3AAssumeRole%22%7D%5D%7D", | ||
29 | + "RoleId": "AROAIPICAZWCWSIUY6WBC", | ||
30 | + "CreateDate": { | ||
31 | + "hour": 6, | ||
32 | + "__class__": "datetime", | ||
33 | + "month": 5, | ||
34 | + "second": 3, | ||
35 | + "microsecond": 0, | ||
36 | + "year": 2015, | ||
37 | + "day": 5, | ||
38 | + "minute": 31 | ||
39 | + }, | ||
40 | + "RoleName": "FooBar2", | ||
41 | + "Path": "/", | ||
42 | + "Arn": "arn:aws:iam::123456789012:role/FooBar2" | ||
43 | + } | ||
44 | + ] | ||
45 | + } | ||
46 | +} |
1 | +{ | ||
2 | + "status_code": 200, | ||
3 | + "data": { | ||
4 | + "ResponseMetadata": { | ||
5 | + "HTTPStatusCode": 200, | ||
6 | + "RequestId": "1b6a1fab-a219-11e5-bb1e-6b18bfdcba09" | ||
7 | + }, | ||
8 | + "IsTruncated": false, | ||
9 | + "Roles": [ | ||
10 | + { | ||
11 | + "AssumeRolePolicyDocument": "%7B%22Version%22%3A%222012-10-17%22%2C%22Statement%22%3A%5B%7B%22Effect%22%3A%22Allow%22%2C%22Principal%22%3A%7B%22Service%22%3A%22lambda.amazonaws.com%22%7D%2C%22Action%22%3A%22sts%3AAssumeRole%22%7D%5D%7D", | ||
12 | + "RoleId": "AROAICWPJDQLUTEOHRQZO", | ||
13 | + "CreateDate": { | ||
14 | + "hour": 4, | ||
15 | + "__class__": "datetime", | ||
16 | + "month": 12, | ||
17 | + "second": 46, | ||
18 | + "microsecond": 0, | ||
19 | + "year": 2015, | ||
20 | + "day": 14, | ||
21 | + "minute": 13 | ||
22 | + }, | ||
23 | + "RoleName": "kappa-simple_dev", | ||
24 | + "Path": "/kappa/", | ||
25 | + "Arn": "arn:aws:iam::123456789012:role/kappa/kappa-simple_dev" | ||
26 | + }, | ||
27 | + { | ||
28 | + "AssumeRolePolicyDocument": "%7B%22Version%22%3A%222012-10-17%22%2C%22Statement%22%3A%5B%7B%22Sid%22%3A%22%22%2C%22Effect%22%3A%22Allow%22%2C%22Principal%22%3A%7B%22AWS%22%3A%22arn%3Aaws%3Aiam%3A%3A123456789012%3Aroot%22%7D%2C%22Action%22%3A%22sts%3AAssumeRole%22%2C%22Condition%22%3A%7B%22StringEquals%22%3A%7B%22sts%3AExternalId%22%3A%22c196gvft3%22%7D%7D%7D%5D%7D", | ||
29 | + "RoleId": "AROAJGQVUYMCJZYCM3MR4", | ||
30 | + "CreateDate": { | ||
31 | + "hour": 15, | ||
32 | + "__class__": "datetime", | ||
33 | + "month": 6, | ||
34 | + "second": 2, | ||
35 | + "microsecond": 0, | ||
36 | + "year": 2015, | ||
37 | + "day": 12, | ||
38 | + "minute": 53 | ||
39 | + }, | ||
40 | + "RoleName": "kate-test-policy-role", | ||
41 | + "Path": "/", | ||
42 | + "Arn": "arn:aws:iam::123456789012:role/kate-test-policy-role" | ||
43 | + } | ||
44 | + ] | ||
45 | + } | ||
46 | +} |
1 | +{ | ||
2 | + "status_code": 201, | ||
3 | + "data": { | ||
4 | + "AliasArn": "arn:aws:lambda:us-west-2:123456789012:function:kappa-simple:dev", | ||
5 | + "FunctionVersion": "12", | ||
6 | + "Name": "dev", | ||
7 | + "ResponseMetadata": { | ||
8 | + "HTTPStatusCode": 201, | ||
9 | + "RequestId": "1872d8ff-a219-11e5-9579-ab6c3f6de03e" | ||
10 | + }, | ||
11 | + "Description": "For stage dev" | ||
12 | + } | ||
13 | +} |
1 | +{ | ||
2 | + "status_code": 400, | ||
3 | + "data": { | ||
4 | + "ResponseMetadata": { | ||
5 | + "HTTPStatusCode": 400, | ||
6 | + "RequestId": "12ed468e-a219-11e5-89fa-9b1d3e60e617" | ||
7 | + }, | ||
8 | + "Error": { | ||
9 | + "Message": "The role defined for the task cannot be assumed by Lambda.", | ||
10 | + "Code": "InvalidParameterValueException" | ||
11 | + } | ||
12 | + } | ||
13 | +} | ||
... | \ No newline at end of file | ... | \ No newline at end of file |
1 | +{ | ||
2 | + "status_code": 400, | ||
3 | + "data": { | ||
4 | + "ResponseMetadata": { | ||
5 | + "HTTPStatusCode": 400, | ||
6 | + "RequestId": "14375279-a219-11e5-b9da-196ca0eccf24" | ||
7 | + }, | ||
8 | + "Error": { | ||
9 | + "Message": "The role defined for the task cannot be assumed by Lambda.", | ||
10 | + "Code": "InvalidParameterValueException" | ||
11 | + } | ||
12 | + } | ||
13 | +} | ||
... | \ No newline at end of file | ... | \ No newline at end of file |
1 | +{ | ||
2 | + "status_code": 400, | ||
3 | + "data": { | ||
4 | + "ResponseMetadata": { | ||
5 | + "HTTPStatusCode": 400, | ||
6 | + "RequestId": "158815a1-a219-11e5-b354-111009c28f60" | ||
7 | + }, | ||
8 | + "Error": { | ||
9 | + "Message": "The role defined for the task cannot be assumed by Lambda.", | ||
10 | + "Code": "InvalidParameterValueException" | ||
11 | + } | ||
12 | + } | ||
13 | +} | ||
... | \ No newline at end of file | ... | \ No newline at end of file |
1 | +{ | ||
2 | + "status_code": 400, | ||
3 | + "data": { | ||
4 | + "ResponseMetadata": { | ||
5 | + "HTTPStatusCode": 400, | ||
6 | + "RequestId": "16d88a59-a219-11e5-abfc-a3c6c8e4d88f" | ||
7 | + }, | ||
8 | + "Error": { | ||
9 | + "Message": "The role defined for the task cannot be assumed by Lambda.", | ||
10 | + "Code": "InvalidParameterValueException" | ||
11 | + } | ||
12 | + } | ||
13 | +} | ||
... | \ No newline at end of file | ... | \ No newline at end of file |
1 | +{ | ||
2 | + "status_code": 201, | ||
3 | + "data": { | ||
4 | + "CodeSha256": "JklpzNjuO6TLDiNe6nVYWeo1Imq6bF5uaMt2L0bqp5Y=", | ||
5 | + "FunctionName": "kappa-simple", | ||
6 | + "ResponseMetadata": { | ||
7 | + "HTTPStatusCode": 201, | ||
8 | + "RequestId": "1820256f-a219-11e5-acaa-ebe01320cf02" | ||
9 | + }, | ||
10 | + "CodeSize": 948, | ||
11 | + "MemorySize": 256, | ||
12 | + "FunctionArn": "arn:aws:lambda:us-west-2:123456789012:function:kappa-simple", | ||
13 | + "Version": "12", | ||
14 | + "Role": "arn:aws:iam::123456789012:role/kappa/kappa-simple_dev", | ||
15 | + "Timeout": 3, | ||
16 | + "LastModified": "2015-12-14T04:13:56.737+0000", | ||
17 | + "Handler": "simple.handler", | ||
18 | + "Runtime": "python2.7", | ||
19 | + "Description": "A very simple Kappa example" | ||
20 | + } | ||
21 | +} |
+{
+    "status_code": 404,
+    "data": {
+        "ResponseMetadata": {
+            "HTTPStatusCode": 404,
+            "RequestId": "12caa276-a219-11e5-bc80-bb0600635952"
+        },
+        "Error": {
+            "Message": "Function not found: arn:aws:lambda:us-west-2:860421987956:function:kappa-simple",
+            "Code": "ResourceNotFoundException"
+        }
+    }
+}
\ No newline at end of file
+{
+    "status_code": 200,
+    "data": {
+        "Code": {
+            "RepositoryType": "S3",
+            "Location": "https://awslambda-us-west-2-tasks.s3-us-west-2.amazonaws.com/snapshots/123456789012/kappa-simple-99dba060-c458-48c6-ab7b-501063603e69?x-amz-security-token=AQoDYXdzECQa4AOvxYmkiVqa3ost0drsHs84f3tyUBYSVQUm%2BVvFZgAqx9JDt55l4N4T%2FwH8302pH0ICUZfCRRfc%2FuWtukJsT33XIsG6Xw0Br8w00y07RRpZYQLiJqTXi0i2EFZ6LMIRsGBgKV%2BdufXXu7P9yfzqBiFUrfUD6fYeRNLdv34aXUDto0G0gTj3ZDv9gqO9q7YEXbeu1NI62cIfuEGph2ptFj5V1E%2BijK0h9XEW0mkfuomQt6oeii%2FkkNNm5tEyUlpeX17z1sbX3NYoqJrap0QdoqXkak%2BFPvJQG7hm7eJ40b2ymve9L3gvIOiKNzmQrzay77uEkYDNLxK89QMlYRtRG6vTHppdZzTVIooTFVdA6NSSvYHnjryStLA3VUnDG%2FsL9xAiHH8l4kzq%2ByvatF%2Fg8wTNXOdFxt0VMVkJVbwG%2FUex7juyEcRAJUGNaHBZNLPJVUL%2BfAQljCwJAnjXxD%2FpjEtyLi9YbdfLGywkBKccoKh7AmjJXwzT8TusWNKmmW0XJL%2Fn81NE84Ni9iVB8JHxRbwaJXT2ou0ytwn%2BIIlRcmwXSIwA3xm%2FXynUTfOuXZ3UMGuBlHtt45uKGJvvp5d6RQicK5q5LXFQgGxj5gUqgty0jPhPE%2BN%2BF8WUwSk3eNwPiwMgwOS4swU%3D&AWSAccessKeyId=ASIAIHZZJVPM3RQS3QOQ&Expires=1450067042&Signature=QeC65kDb6N4CNRGn9IiQNBSpl4g%3D"
+        },
+        "Configuration": {
+            "Version": "$LATEST",
+            "CodeSha256": "JklpzNjuO6TLDiNe6nVYWeo1Imq6bF5uaMt2L0bqp5Y=",
+            "FunctionName": "kappa-simple",
+            "MemorySize": 256,
+            "CodeSize": 948,
+            "FunctionArn": "arn:aws:lambda:us-west-2:123456789012:function:kappa-simple",
+            "Handler": "simple.handler",
+            "Role": "arn:aws:iam::123456789012:role/kappa/kappa-simple_dev",
+            "Timeout": 3,
+            "LastModified": "2015-12-14T04:13:56.737+0000",
+            "Runtime": "python2.7",
+            "Description": "A very simple Kappa example"
+        },
+        "ResponseMetadata": {
+            "HTTPStatusCode": 200,
+            "RequestId": "1bc69855-a219-11e5-990d-c158fa575e6a"
+        }
+    }
+}
+{
+    "status_code": 200,
+    "data": {
+        "ResponseMetadata": {
+            "HTTPStatusCode": 200,
+            "RequestId": "1860ff11-a219-11e5-b9da-196ca0eccf24"
+        },
+        "Versions": [
+            {
+                "Version": "$LATEST",
+                "CodeSha256": "JklpzNjuO6TLDiNe6nVYWeo1Imq6bF5uaMt2L0bqp5Y=",
+                "FunctionName": "kappa-simple",
+                "MemorySize": 256,
+                "CodeSize": 948,
+                "FunctionArn": "arn:aws:lambda:us-west-2:123456789012:function:kappa-simple:$LATEST",
+                "Handler": "simple.handler",
+                "Role": "arn:aws:iam::123456789012:role/kappa/kappa-simple_dev",
+                "Timeout": 3,
+                "LastModified": "2015-12-14T04:13:56.737+0000",
+                "Runtime": "python2.7",
+                "Description": "A very simple Kappa example"
+            },
+            {
+                "Version": "12",
+                "CodeSha256": "JklpzNjuO6TLDiNe6nVYWeo1Imq6bF5uaMt2L0bqp5Y=",
+                "FunctionName": "kappa-simple",
+                "MemorySize": 256,
+                "CodeSize": 948,
+                "FunctionArn": "arn:aws:lambda:us-west-2:123456789012:function:kappa-simple:12",
+                "Handler": "simple.handler",
+                "Role": "arn:aws:iam::123456789012:role/kappa/kappa-simple_dev",
+                "Timeout": 3,
+                "LastModified": "2015-12-14T04:13:56.737+0000",
+                "Runtime": "python2.7",
+                "Description": "A very simple Kappa example"
+            }
+        ]
+    }
+}
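The JSON documents above are canned AWS responses recorded with placebo; during playback, each call to a Lambda API operation consumes the next fixture recorded for that operation. A minimal, stdlib-only sketch of the fixture shape (the inline JSON below is a copy of the 404 fixture above, not a new recording; the error-detection rule is our reading of the format, not placebo's code):

```python
import json

# Parse a placebo-style fixture: "status_code" mirrors the HTTP status and
# "data" holds what the boto3 client call returned. When "data" carries an
# "Error" key, playback surfaces the call as a failure rather than a result.
fixture = json.loads("""
{
    "status_code": 404,
    "data": {
        "ResponseMetadata": {
            "HTTPStatusCode": 404,
            "RequestId": "12caa276-a219-11e5-bc80-bb0600635952"
        },
        "Error": {
            "Message": "Function not found: arn:aws:lambda:us-west-2:860421987956:function:kappa-simple",
            "Code": "ResourceNotFoundException"
        }
    }
}
""")

is_error = "Error" in fixture["data"]
```

Fixtures like this let the deploy test below run entirely offline against a fixed sequence of AWS behaviors.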
tests/unit/test_deploy.py (new file, mode 100644)
+# Copyright (c) 2015 Mitch Garnaat http://garnaat.org/
+#
+# Licensed under the Apache License, Version 2.0 (the "License"). You
+# may not use this file except in compliance with the License. A copy of
+# the License is located at
+#
+# http://aws.amazon.com/apache2.0/
+#
+# or in the "license" file accompanying this file. This file is
+# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
+# ANY KIND, either express or implied. See the License for the specific
+# language governing permissions and limitations under the License.
+
+import unittest
+import os
+import shutil
+
+import mock
+import placebo
+
+import kappa.context
+import kappa.awsclient
+
+
+class TestDeploy(unittest.TestCase):
+
+    def setUp(self):
+        self.environ = {}
+        self.environ_patch = mock.patch('os.environ', self.environ)
+        self.environ_patch.start()
+        credential_path = os.path.join(os.path.dirname(__file__), 'cfg',
+                                       'aws_credentials')
+        self.environ['AWS_SHARED_CREDENTIALS_FILE'] = credential_path
+        self.prj_path = os.path.join(os.path.dirname(__file__), 'foobar')
+        cache_file = os.path.join(self.prj_path, '.kappa')
+        if os.path.exists(cache_file):
+            shutil.rmtree(cache_file)
+        self.data_path = os.path.join(os.path.dirname(__file__), 'responses')
+        self.data_path = os.path.join(self.data_path, 'deploy')
+        self.session = kappa.awsclient.create_session('foobar', 'us-west-2')
+
+    def tearDown(self):
+        self.environ_patch.stop()
+
+    def test_deploy(self):
+        pill = placebo.attach(self.session, self.data_path)
+        pill.playback()
+        os.chdir(self.prj_path)
+        cfg_filepath = os.path.join(self.prj_path, 'kappa.yml')
+        cfg_fp = open(cfg_filepath)
+        ctx = kappa.context.Context(cfg_fp, 'dev')
+        ctx.deploy()
+        ctx.deploy()
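The test above drives kappa's full deploy path twice against one recording. The playback idea it relies on can be sketched with a toy stand-in (this illustrates the concept only; it is not placebo's actual implementation, and `FakePill` is a hypothetical name): successive calls to an operation pop successive canned responses, and an error payload is raised instead of returned, which is how the first `ctx.deploy()` can see "not found" fixtures while the second sees success.

```python
# Toy stand-in for placebo playback (illustrative only): successive calls
# consume successive canned (status, data) fixtures in recorded order.
class FakePill:
    def __init__(self, responses):
        self._responses = list(responses)

    def call(self):
        status, data = self._responses.pop(0)
        if status >= 400:
            # placebo itself surfaces error fixtures as botocore exceptions;
            # RuntimeError keeps this sketch dependency-free.
            raise RuntimeError(data["Error"]["Code"])
        return data


pill = FakePill([
    (404, {"Error": {"Code": "ResourceNotFoundException"}}),   # first call: no function yet
    (201, {"FunctionName": "kappa-simple", "Version": "12"}),  # next call: create succeeds
])

try:
    pill.call()
except RuntimeError as exc:
    first_error = str(exc)

created = pill.call()
```

Calling `ctx.deploy()` twice in the test exercises both branches: the create path (driven by the 404/400 fixtures) and the update path (driven by the 200/201 fixtures).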
tests/unit/test_log.py (deleted, was mode 100644)
-# Copyright (c) 2014 Mitch Garnaat http://garnaat.org/
-#
-# Licensed under the Apache License, Version 2.0 (the "License"). You
-# may not use this file except in compliance with the License. A copy of
-# the License is located at
-#
-# http://aws.amazon.com/apache2.0/
-#
-# or in the "license" file accompanying this file. This file is
-# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
-# ANY KIND, either express or implied. See the License for the specific
-# language governing permissions and limitations under the License.
-
-import unittest
-
-import mock
-
-from kappa.log import Log
-from tests.unit.mock_aws import get_aws
-
-
-class TestLog(unittest.TestCase):
-
-    def setUp(self):
-        self.aws_patch = mock.patch('kappa.aws.get_aws', get_aws)
-        self.mock_aws = self.aws_patch.start()
-
-    def tearDown(self):
-        self.aws_patch.stop()
-
-    def test_streams(self):
-        mock_context = mock.Mock()
-        log = Log(mock_context, 'foo/bar')
-        streams = log.streams()
-        self.assertEqual(len(streams), 6)
-
-    def test_tail(self):
-        mock_context = mock.Mock()
-        log = Log(mock_context, 'foo/bar')
-        events = log.tail()
-        self.assertEqual(len(events), 6)
-        self.assertEqual(events[0]['ingestionTime'], 1420569036909)
-        self.assertIn('RequestId: 23007242-95d2-11e4-a10e-7b2ab60a7770',
-                      events[-1]['message'])
tests/unit/test_policy.py (deleted, was mode 100644)
-# Copyright (c) 2015 Mitch Garnaat http://garnaat.org/
-#
-# Licensed under the Apache License, Version 2.0 (the "License"). You
-# may not use this file except in compliance with the License. A copy of
-# the License is located at
-#
-# http://aws.amazon.com/apache2.0/
-#
-# or in the "license" file accompanying this file. This file is
-# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
-# ANY KIND, either express or implied. See the License for the specific
-# language governing permissions and limitations under the License.
-
-import unittest
-import os
-
-import mock
-
-from kappa.policy import Policy
-from tests.unit.mock_aws import get_aws
-
-Config1 = {
-    'name': 'FooPolicy',
-    'description': 'This is the Foo policy',
-    'document': 'FooPolicy.json'}
-
-Config2 = {
-    'name': 'BazPolicy',
-    'description': 'This is the Baz policy',
-    'document': 'BazPolicy.json'}
-
-
-def path(filename):
-    return os.path.join(os.path.dirname(__file__), 'data', filename)
-
-
-class TestPolicy(unittest.TestCase):
-
-    def setUp(self):
-        self.aws_patch = mock.patch('kappa.aws.get_aws', get_aws)
-        self.mock_aws = self.aws_patch.start()
-        Config1['document'] = path(Config1['document'])
-        Config2['document'] = path(Config2['document'])
-
-    def tearDown(self):
-        self.aws_patch.stop()
-
-    def test_properties(self):
-        mock_context = mock.Mock()
-        policy = Policy(mock_context, Config1)
-        self.assertEqual(policy.name, Config1['name'])
-        self.assertEqual(policy.document, Config1['document'])
-        self.assertEqual(policy.description, Config1['description'])
-
-    def test_exists(self):
-        mock_context = mock.Mock()
-        policy = Policy(mock_context, Config1)
-        self.assertTrue(policy.exists())
-
-    def test_not_exists(self):
-        mock_context = mock.Mock()
-        policy = Policy(mock_context, Config2)
-        self.assertFalse(policy.exists())
-
-    def test_create(self):
-        mock_context = mock.Mock()
-        policy = Policy(mock_context, Config2)
-        policy.create()
-
-    def test_delete(self):
-        mock_context = mock.Mock()
-        policy = Policy(mock_context, Config1)
-        policy.delete()
tests/unit/test_role.py (deleted, was mode 100644)
-# Copyright (c) 2015 Mitch Garnaat http://garnaat.org/
-#
-# Licensed under the Apache License, Version 2.0 (the "License"). You
-# may not use this file except in compliance with the License. A copy of
-# the License is located at
-#
-# http://aws.amazon.com/apache2.0/
-#
-# or in the "license" file accompanying this file. This file is
-# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
-# ANY KIND, either express or implied. See the License for the specific
-# language governing permissions and limitations under the License.
-
-import unittest
-
-import mock
-
-from kappa.role import Role
-from tests.unit.mock_aws import get_aws
-
-Config1 = {'name': 'FooRole'}
-
-Config2 = {'name': 'BazRole'}
-
-
-class TestRole(unittest.TestCase):
-
-    def setUp(self):
-        self.aws_patch = mock.patch('kappa.aws.get_aws', get_aws)
-        self.mock_aws = self.aws_patch.start()
-
-    def tearDown(self):
-        self.aws_patch.stop()
-
-    def test_properties(self):
-        mock_context = mock.Mock()
-        role = Role(mock_context, Config1)
-        self.assertEqual(role.name, Config1['name'])
-
-    def test_exists(self):
-        mock_context = mock.Mock()
-        role = Role(mock_context, Config1)
-        self.assertTrue(role.exists())
-
-    def test_not_exists(self):
-        mock_context = mock.Mock()
-        role = Role(mock_context, Config2)
-        self.assertFalse(role.exists())
-
-    def test_create(self):
-        mock_context = mock.Mock()
-        role = Role(mock_context, Config2)
-        role.create()
-
-    def test_delete(self):
-        mock_context = mock.Mock()
-        role = Role(mock_context, Config1)
-        role.delete()