Mitch Garnaat

Merge pull request #14 from garnaat/update-for-ga

Update for ga
@@ -12,37 +12,52 @@ There are quite a few steps involved in developing a Lambda function.
 You have to:
 
 * Write the function itself (Javascript only for now)
-* Create the IAM roles required by the Lambda function itself (the executing
-role) as well as the policy required by whoever is invoking the Lambda
-function (the invocation role)
+* Create the IAM role required by the Lambda function itself (the executing
+role) to allow it access to any resources it needs to do its job
+* Add additional permissions to the Lambda function if it is going to be used
+in a Push model (e.g. S3, SNS) rather than a Pull model.
 * Zip the function and any dependencies and upload it to AWS Lambda
 * Test the function with mock data
 * Retrieve the output of the function from CloudWatch Logs
 * Add an event source to the function
 * View the output of the live function
 
-Kappa tries to help you with some of this. The IAM roles are created
-in a CloudFormation template and kappa takes care of creating, updating, and
-deleting the CloudFormation stack. Kappa will also zip up the function and
+Kappa tries to help you with some of this. It allows you to create an IAM
+managed policy or use an existing one. It creates the IAM execution role for
+you and associates the policy with it. Kappa will zip up the function and
 any dependencies and upload them to AWS Lambda. It also sends test data
 to the uploaded function and finds the related CloudWatch log stream and
 displays the log events. Finally, it will add the event source to turn
 your function on.
 
+If you need to make changes, kappa will allow you to easily update your Lambda
+function with new code or update your event sources as needed.
+
+Getting Started
+---------------
+
 Kappa is a command line tool. The basic command format is:
 
     kappa <path to config file> <command> [optional command args]
 
 Where ``command`` is one of:
 
-* deploy - deploy the CloudFormation template containing the IAM roles and zip
-  the function and upload it to AWS Lambda
-* test - send test data to the new Lambda function
+* create - creates the IAM policy (if necessary), the IAM role, and zips and
+  uploads the Lambda function code to the Lambda service
+* invoke - make a synchronous call to your Lambda function, passing test data,
+  and display the resulting log data
+* invoke_async - make an asynchronous call to your Lambda function, passing
+  test data
+* dryrun - make the call but only check things like permissions and report
+  back; don't actually run the code
 * tail - display the most recent log events for the function (remember that it
   can take several minutes before log events are available from CloudWatch)
 * add-event-sources - hook up an event source to your Lambda function
-* delete - delete the CloudFormation stack containing the IAM roles and delete
-  the Lambda function
+* delete - delete the Lambda function, remove any event sources, delete the IAM
+  policy and role
+* update_code - upload new code for your Lambda function
+* update_event_sources - update the event sources based on the information in
+  your kappa config file
 * status - display summary information about functions, stacks, and event
   sources related to your project.
 
@@ -58,14 +73,12 @@ An example project based on a Kinesis stream can be found in
 The basic workflow is:
 
 * Create your Lambda function
-* Create your CloudFormation template with the execution and invocation roles
+* Create any custom IAM policy you need to execute your Lambda function
 * Create some sample data
 * Create the YAML config file with all of the information
-* Run ``kappa <path-to-config> deploy`` to create roles and upload function
-* Run ``kappa <path-to-config> test`` to invoke the function with test data
-* Run ``kappa <path-to-config> tail`` to view the functions output in CloudWatch logs
+* Run ``kappa <path-to-config> create`` to create roles and upload function
+* Run ``kappa <path-to-config> invoke`` to invoke the function with test data
+* Run ``kappa <path-to-config> update_code`` to upload new code for your Lambda
+  function
 * Run ``kappa <path-to-config> add-event-source`` to hook your function up to the event source
 * Run ``kappa <path-to-config> tail`` to see more output
-
-If you have to make changes in your function or in your IAM roles, simply run
-``kappa deploy`` again and the changes will be uploaded as necessary.
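The commands above drive the classes changed later in this pull request. Purely as an illustration of that wiring (kappa itself is documented only as a command line tool), here is a minimal Python sketch using the Context class from kappa/context.py; it assumes a kappa config file named config.yml in the current directory, and that filename is illustrative:

    # Sketch only: mirrors what the create/invoke/add-event-sources commands do.
    from kappa.context import Context

    with open('config.yml') as config_file:
        ctx = Context(config_file, False)    # same two arguments the CLI passes
        ctx.create()              # IAM policy/role, then zip and upload the code
        response = ctx.invoke()   # synchronous test call (RequestResponse)
        print(response)
        ctx.add_event_sources()   # hook up the event sources from the config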
......
@@ -13,6 +13,7 @@
13 # language governing permissions and limitations under the License. 13 # language governing permissions and limitations under the License.
14 from datetime import datetime 14 from datetime import datetime
15 import logging 15 import logging
16 +import base64
16 17
17 import click 18 import click
18 19
@@ -38,18 +39,46 @@ def cli(ctx, config=None, debug=False):
38 39
39 @cli.command() 40 @cli.command()
40 @click.pass_context 41 @click.pass_context
41 -def deploy(ctx): 42 +def create(ctx):
42 context = Context(ctx.obj['config'], ctx.obj['debug']) 43 context = Context(ctx.obj['config'], ctx.obj['debug'])
43 - click.echo('deploying...') 44 + click.echo('creating...')
44 - context.deploy() 45 + context.create()
45 click.echo('...done') 46 click.echo('...done')
46 47
47 @cli.command() 48 @cli.command()
48 @click.pass_context 49 @click.pass_context
49 -def test(ctx): 50 +def update_code(ctx):
50 context = Context(ctx.obj['config'], ctx.obj['debug']) 51 context = Context(ctx.obj['config'], ctx.obj['debug'])
51 - click.echo('testing...') 52 + click.echo('updating code...')
52 - context.test() 53 + context.update_code()
54 + click.echo('...done')
55 +
56 +@cli.command()
57 +@click.pass_context
58 +def invoke(ctx):
59 + context = Context(ctx.obj['config'], ctx.obj['debug'])
60 + click.echo('invoking...')
61 + response = context.invoke()
62 + log_data = base64.b64decode(response['LogResult'])
63 + click.echo(log_data)
64 + click.echo('...done')
65 +
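The base64 decode above is there because, when Lambda's Invoke API is called with LogType='Tail', the tail of the execution log is returned base64 encoded in the response's LogResult field. A standalone boto3 sketch of the same call, outside of kappa (the function name and payload file are the sample values from this repo and purely illustrative):

    import base64
    import boto3

    lambda_svc = boto3.session.Session().client('lambda')
    response = lambda_svc.invoke(
        FunctionName='KinesisSample',
        InvocationType='RequestResponse',
        LogType='Tail',
        Payload=open('input.json', 'rb').read())
    # LogResult is only returned for RequestResponse invocations with LogType='Tail'
    print(base64.b64decode(response['LogResult']))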
66 +@cli.command()
67 +@click.pass_context
68 +def dryrun(ctx):
69 + context = Context(ctx.obj['config'], ctx.obj['debug'])
70 + click.echo('invoking dryrun...')
71 + response = context.dryrun()
72 + click.echo(response)
73 + click.echo('...done')
74 +
75 +@cli.command()
76 +@click.pass_context
77 +def invoke_async(ctx):
78 + context = Context(ctx.obj['config'], ctx.obj['debug'])
79 + click.echo('invoking async...')
80 + response = context.invoke_async()
81 + click.echo(response)
53 click.echo('...done') 82 click.echo('...done')
54 83
55 @cli.command() 84 @cli.command()
@@ -67,33 +96,35 @@ def tail(ctx):
67 def status(ctx): 96 def status(ctx):
68 context = Context(ctx.obj['config'], ctx.obj['debug']) 97 context = Context(ctx.obj['config'], ctx.obj['debug'])
69 status = context.status() 98 status = context.status()
70 - click.echo(click.style('Stack', bold=True)) 99 + click.echo(click.style('Policy', bold=True))
71 - if status['stack']: 100 + if status['policy']:
72 - for stack in status['stack']['Stacks']: 101 + line = ' {} ({})'.format(
73 - line = ' {}: {}'.format(stack['StackId'], stack['StackStatus']) 102 + status['policy']['PolicyName'],
74 - click.echo(click.style(line, fg='green')) 103 + status['policy']['Arn'])
75 - else: 104 + click.echo(click.style(line, fg='green'))
76 - click.echo(click.style(' None', fg='green')) 105 + click.echo(click.style('Role', bold=True))
106 + if status['role']:
107 + line = ' {} ({})'.format(
108 + status['role']['Role']['RoleName'],
109 + status['role']['Role']['Arn'])
110 + click.echo(click.style(line, fg='green'))
77 click.echo(click.style('Function', bold=True)) 111 click.echo(click.style('Function', bold=True))
78 if status['function']: 112 if status['function']:
79 - line = ' {}'.format( 113 + line = ' {} ({})'.format(
80 - status['function']['Configuration']['FunctionName']) 114 + status['function']['Configuration']['FunctionName'],
115 + status['function']['Configuration']['FunctionArn'])
81 click.echo(click.style(line, fg='green')) 116 click.echo(click.style(line, fg='green'))
82 else: 117 else:
83 click.echo(click.style(' None', fg='green')) 118 click.echo(click.style(' None', fg='green'))
84 click.echo(click.style('Event Sources', bold=True)) 119 click.echo(click.style('Event Sources', bold=True))
85 if status['event_sources']: 120 if status['event_sources']:
86 for event_source in status['event_sources']: 121 for event_source in status['event_sources']:
87 - if 'EventSource' in event_source: 122 + if event_source:
88 line = ' {}: {}'.format( 123 line = ' {}: {}'.format(
89 - event_source['EventSource'], event_source['IsActive']) 124 + event_source['EventSourceArn'], event_source['State'])
90 click.echo(click.style(line, fg='green')) 125 click.echo(click.style(line, fg='green'))
91 else: 126 else:
92 - line = ' {}'.format( 127 + click.echo(click.style(' None', fg='green'))
93 - event_source['CloudFunctionConfiguration']['Id'])
94 - click.echo(click.style(line, fg='green'))
95 - else:
96 - click.echo(click.style(' None', fg='green'))
97 128
98 @cli.command() 129 @cli.command()
99 @click.pass_context 130 @click.pass_context
@@ -111,6 +142,14 @@ def add_event_sources(ctx):
111 context.add_event_sources() 142 context.add_event_sources()
112 click.echo('...done') 143 click.echo('...done')
113 144
145 +@cli.command()
146 +@click.pass_context
147 +def update_event_sources(ctx):
148 + context = Context(ctx.obj['config'], ctx.obj['debug'])
149 + click.echo('updating event sources...')
150 + context.update_event_sources()
151 + click.echo('...done')
152 +
114 153
115 if __name__ == '__main__': 154 if __name__ == '__main__':
116 cli(obj={}) 155 cli(obj={})
......
1 -# Copyright (c) 2014 Mitch Garnaat http://garnaat.org/ 1 +# Copyright (c) 2014,2015 Mitch Garnaat http://garnaat.org/
2 # 2 #
3 # Licensed under the Apache License, Version 2.0 (the "License"). You 3 # Licensed under the Apache License, Version 2.0 (the "License"). You
4 # may not use this file except in compliance with the License. A copy of 4 # may not use this file except in compliance with the License. A copy of
@@ -11,21 +11,20 @@
11 # ANY KIND, either express or implied. See the License for the specific 11 # ANY KIND, either express or implied. See the License for the specific
12 # language governing permissions and limitations under the License. 12 # language governing permissions and limitations under the License.
13 13
14 -import botocore.session 14 +import boto3
15 15
16 16
17 class __AWS(object): 17 class __AWS(object):
18 18
19 - def __init__(self, profile=None, region=None): 19 + def __init__(self, profile_name=None, region_name=None):
20 self._client_cache = {} 20 self._client_cache = {}
21 - self._session = botocore.session.get_session() 21 + self._session = boto3.session.Session(
22 - self._session.profile = profile 22 + region_name=region_name, profile_name=profile_name)
23 - self._region = region
24 23
25 def create_client(self, client_name): 24 def create_client(self, client_name):
26 if client_name not in self._client_cache: 25 if client_name not in self._client_cache:
27 - self._client_cache[client_name] = self._session.create_client( 26 + self._client_cache[client_name] = self._session.client(
28 - client_name, self._region) 27 + client_name)
29 return self._client_cache[client_name] 28 return self._client_cache[client_name]
30 29
31 30
......
@@ -13,10 +13,12 @@
13 13
14 import logging 14 import logging
15 import yaml 15 import yaml
16 +import time
16 17
17 import kappa.function 18 import kappa.function
18 import kappa.event_source 19 import kappa.event_source
19 -import kappa.stack 20 +import kappa.policy
21 +import kappa.role
20 22
21 LOG = logging.getLogger(__name__) 23 LOG = logging.getLogger(__name__)
22 24
@@ -32,8 +34,16 @@ class Context(object):
32 else: 34 else:
33 self.set_logger('kappa', logging.INFO) 35 self.set_logger('kappa', logging.INFO)
34 self.config = yaml.load(config_file) 36 self.config = yaml.load(config_file)
35 - self._stack = kappa.stack.Stack( 37 + if 'policy' in self.config.get('iam', ''):
36 - self, self.config['cloudformation']) 38 + self.policy = kappa.policy.Policy(
39 + self, self.config['iam']['policy'])
40 + else:
41 + self.policy = None
42 + if 'role' in self.config.get('iam', ''):
43 + self.role = kappa.role.Role(
44 + self, self.config['iam']['role'])
45 + else:
46 + self.role = None
37 self.function = kappa.function.Function( 47 self.function = kappa.function.Function(
38 self, self.config['lambda']) 48 self, self.config['lambda'])
39 self.event_sources = [] 49 self.event_sources = []
@@ -57,11 +67,7 @@ class Context(object):
57 67
58 @property 68 @property
59 def exec_role_arn(self): 69 def exec_role_arn(self):
60 - return self._stack.exec_role_arn 70 + return self.role.arn
61 -
62 - @property
63 - def invoke_role_arn(self):
64 - return self._stack.invoke_role_arn
65 71
66 def debug(self): 72 def debug(self):
67 self.set_logger('kappa', logging.DEBUG) 73 self.set_logger('kappa', logging.DEBUG)
@@ -90,44 +96,88 @@ class Context(object):
90 log.addHandler(ch) 96 log.addHandler(ch)
91 97
92 def _create_event_sources(self): 98 def _create_event_sources(self):
93 - for event_source_cfg in self.config['lambda']['event_sources']: 99 + if 'event_sources' in self.config['lambda']:
94 - _, _, svc, _ = event_source_cfg['arn'].split(':', 3) 100 + for event_source_cfg in self.config['lambda']['event_sources']:
95 - if svc == 'kinesis': 101 + _, _, svc, _ = event_source_cfg['arn'].split(':', 3)
96 - self.event_sources.append( 102 + if svc == 'kinesis':
97 - kappa.event_source.KinesisEventSource( 103 + self.event_sources.append(
104 + kappa.event_source.KinesisEventSource(
105 + self, event_source_cfg))
106 + elif svc == 's3':
107 + self.event_sources.append(kappa.event_source.S3EventSource(
98 self, event_source_cfg)) 108 self, event_source_cfg))
99 - elif svc == 's3': 109 + elif svc == 'sns':
100 - self.event_sources.append(kappa.event_source.S3EventSource( 110 + self.event_sources.append(
101 - self, event_source_cfg)) 111 + kappa.event_source.SNSEventSource(
102 - else: 112 + self, event_source_cfg))
103 - msg = 'Unsupported event source: %s' % event_source_cfg['arn'] 113 + elif svc == 'dynamodb':
104 - raise ValueError(msg) 114 + self.event_sources.append(
115 + kappa.event_source.DynamoDBStreamEventSource(
116 + self, event_source_cfg))
117 + else:
118 + msg = 'Unknown event source: %s' % event_source_cfg['arn']
119 + raise ValueError(msg)
105 120
106 def add_event_sources(self): 121 def add_event_sources(self):
107 for event_source in self.event_sources: 122 for event_source in self.event_sources:
108 event_source.add(self.function) 123 event_source.add(self.function)
109 124
110 - def deploy(self): 125 + def update_event_sources(self):
111 - self._stack.update() 126 + for event_source in self.event_sources:
112 - self.function.upload() 127 + event_source.update(self.function)
128 +
129 + def create(self):
130 + if self.policy:
131 + self.policy.create()
132 + if self.role:
133 + self.role.create()
134 + # There is a consistency problem here.
135 + # If you don't wait for a bit, the function.create call
136 + # will fail because the policy has not been attached to the role.
137 + LOG.debug('Waiting for policy/role propagation')
138 + time.sleep(5)
139 + self.function.create()
140 +
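The comment above describes the real issue: IAM role and policy attachment is eventually consistent, and this change handles it with a fixed five-second sleep. Shown only as a sketch (not part of this change), an alternative is to retry the call that needs the role until IAM has propagated; the attempt count and delay are illustrative:

    import time

    def call_with_retry(fn, attempts=10, delay=3):
        """Retry a callable that fails while an IAM role/policy is propagating."""
        for attempt in range(attempts):
            try:
                return fn()
            except Exception:
                if attempt == attempts - 1:
                    raise
                time.sleep(delay)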
141 + def update_code(self):
142 + self.function.update()
113 143
114 - def test(self): 144 + def invoke(self):
115 - self.function.test() 145 + return self.function.invoke()
146 +
147 + def dryrun(self):
148 + return self.function.dryrun()
149 +
150 + def invoke_async(self):
151 + return self.function.invoke_async()
116 152
117 def tail(self): 153 def tail(self):
118 return self.function.tail() 154 return self.function.tail()
119 155
120 def delete(self): 156 def delete(self):
121 - self._stack.delete()
122 - self.function.delete()
123 for event_source in self.event_sources: 157 for event_source in self.event_sources:
124 event_source.remove(self.function) 158 event_source.remove(self.function)
159 + self.function.delete()
160 + time.sleep(5)
161 + if self.role:
162 + self.role.delete()
163 + time.sleep(5)
164 + if self.policy:
165 + self.policy.delete()
125 166
126 def status(self): 167 def status(self):
127 status = {} 168 status = {}
128 - status['stack'] = self._stack.status() 169 + if self.policy:
170 + status['policy'] = self.policy.status()
171 + else:
172 + status['policy'] = None
173 + if self.role:
174 + status['role'] = self.role.status()
175 + else:
176 + status['role'] = None
129 status['function'] = self.function.status() 177 status['function'] = self.function.status()
130 status['event_sources'] = [] 178 status['event_sources'] = []
131 - for event_source in self.event_sources: 179 + if self.event_sources:
132 - status['event_sources'].append(event_source.status(self.function)) 180 + for event_source in self.event_sources:
181 + status['event_sources'].append(
182 + event_source.status(self.function))
133 return status 183 return status
......
@@ -31,9 +31,17 @@ class EventSource(object):
31 return self._config['arn'] 31 return self._config['arn']
32 32
33 @property 33 @property
34 + def starting_position(self):
35 + return self._config.get('starting_position', 'TRIM_HORIZON')
36 +
37 + @property
34 def batch_size(self): 38 def batch_size(self):
35 return self._config.get('batch_size', 100) 39 return self._config.get('batch_size', 100)
36 40
41 + @property
42 + def enabled(self):
43 + return self._config.get('enabled', True)
44 +
37 45
38 class KinesisEventSource(EventSource): 46 class KinesisEventSource(EventSource):
39 47
@@ -44,46 +52,71 @@ class KinesisEventSource(EventSource):
44 52
45 def _get_uuid(self, function): 53 def _get_uuid(self, function):
46 uuid = None 54 uuid = None
47 - response = self._lambda.list_event_sources( 55 + response = self._lambda.list_event_source_mappings(
48 FunctionName=function.name, 56 FunctionName=function.name,
49 EventSourceArn=self.arn) 57 EventSourceArn=self.arn)
50 LOG.debug(response) 58 LOG.debug(response)
51 - if len(response['EventSources']) > 0: 59 + if len(response['EventSourceMappings']) > 0:
52 - uuid = response['EventSources'][0]['UUID'] 60 + uuid = response['EventSourceMappings'][0]['UUID']
53 return uuid 61 return uuid
54 62
55 def add(self, function): 63 def add(self, function):
56 try: 64 try:
57 - response = self._lambda.add_event_source( 65 + response = self._lambda.create_event_source_mapping(
58 FunctionName=function.name, 66 FunctionName=function.name,
59 - Role=self._context.invoke_role_arn, 67 + EventSourceArn=self.arn,
60 - EventSource=self.arn, 68 + BatchSize=self.batch_size,
61 - BatchSize=self.batch_size) 69 + StartingPosition=self.starting_position,
70 + Enabled=self.enabled
71 + )
62 LOG.debug(response) 72 LOG.debug(response)
63 except Exception: 73 except Exception:
64 - LOG.exception('Unable to add Kinesis event source') 74 + LOG.exception('Unable to add event source')
75 +
76 + def update(self, function):
77 + response = None
78 + uuid = self._get_uuid(function)
79 + if uuid:
80 + try:
81 + response = self._lambda.update_event_source_mapping(
82 + UUID=uuid, BatchSize=self.batch_size,
83 + Enabled=self.enabled,
84 + FunctionName=function.arn)
85 + LOG.debug(response)
86 + except Exception:
87 + LOG.exception('Unable to update event source')
65 88
66 def remove(self, function): 89 def remove(self, function):
67 response = None 90 response = None
68 uuid = self._get_uuid(function) 91 uuid = self._get_uuid(function)
69 if uuid: 92 if uuid:
70 - response = self._lambda.remove_event_source( 93 + response = self._lambda.delete_event_source_mapping(
71 UUID=uuid) 94 UUID=uuid)
72 LOG.debug(response) 95 LOG.debug(response)
73 return response 96 return response
74 97
75 def status(self, function): 98 def status(self, function):
99 + response = None
76 LOG.debug('getting status for event source %s', self.arn) 100 LOG.debug('getting status for event source %s', self.arn)
77 - try: 101 + uuid = self._get_uuid(function)
78 - response = self._lambda.get_event_source( 102 + if uuid:
79 - UUID=self._get_uuid(function)) 103 + try:
80 - LOG.debug(response) 104 + response = self._lambda.get_event_source_mapping(
81 - except ClientError: 105 + UUID=self._get_uuid(function))
82 - LOG.debug('event source %s does not exist', self.arn) 106 + LOG.debug(response)
83 - response = None 107 + except ClientError:
108 + LOG.debug('event source %s does not exist', self.arn)
109 + response = None
110 + else:
111 + LOG.debug('No UUID for event source %s', self.arn)
84 return response 112 return response
85 113
86 114
115 +class DynamoDBStreamEventSource(KinesisEventSource):
116 +
117 + pass
118 +
119 +
87 class S3EventSource(EventSource): 120 class S3EventSource(EventSource):
88 121
89 def __init__(self, context, config): 122 def __init__(self, context, config):
@@ -134,3 +167,50 @@ class S3EventSource(EventSource):
134 if 'CloudFunctionConfiguration' not in response: 167 if 'CloudFunctionConfiguration' not in response:
135 response = None 168 response = None
136 return response 169 return response
170 +
171 +
172 +class SNSEventSource(EventSource):
173 +
174 + def __init__(self, context, config):
175 + super(SNSEventSource, self).__init__(context, config)
176 + aws = kappa.aws.get_aws(context)
177 + self._sns = aws.create_client('sns')
178 +
179 + def _make_notification_id(self, function_name):
180 + return 'Kappa-%s-notification' % function_name
181 +
182 + def exists(self, function):
183 + try:
184 + response = self._sns.list_subscriptions_by_topic(
185 + TopicArn=self.arn)
186 + LOG.debug(response)
187 + for subscription in response['Subscriptions']:
188 + if subscription['Endpoint'] == function.arn:
189 + return subscription
190 + return None
191 + except Exception:
192 + LOG.exception('Unable to find event source %s', self.arn)
193 +
194 + def add(self, function):
195 + try:
196 + response = self._sns.subscribe(
197 + TopicArn=self.arn, Protocol='lambda',
198 + Endpoint=function.arn)
199 + LOG.debug(response)
200 + except Exception:
201 + LOG.exception('Unable to add SNS event source')
202 +
203 + def remove(self, function):
204 + LOG.debug('removing SNS event source')
205 + try:
206 + subscription = self.exists(function)
207 + if subscription:
208 + response = self._sns.unsubscribe(
209 + SubscriptionArn=subscription['SubscriptionArn'])
210 + LOG.debug(response)
211 + except Exception:
212 + LOG.exception('Unable to remove event source %s', self.arn)
213 +
214 + def status(self, function):
215 + LOG.debug('status for SNS notification for %s', function.name)
216 + return self.exists(function)
......
@@ -46,10 +46,6 @@ class Function(object):
46 return self._config['handler'] 46 return self._config['handler']
47 47
48 @property 48 @property
49 - def mode(self):
50 - return self._config['mode']
51 -
52 - @property
53 def description(self): 49 def description(self):
54 return self._config['description'] 50 return self._config['description']
55 51
@@ -74,13 +70,17 @@ class Function(object):
74 return self._config['test_data'] 70 return self._config['test_data']
75 71
76 @property 72 @property
73 + def permissions(self):
74 + return self._config.get('permissions', list())
75 +
76 + @property
77 def arn(self): 77 def arn(self):
78 if self._arn is None: 78 if self._arn is None:
79 try: 79 try:
80 - response = self._lambda_svc.get_function_configuration( 80 + response = self._lambda_svc.get_function(
81 FunctionName=self.name) 81 FunctionName=self.name)
82 LOG.debug(response) 82 LOG.debug(response)
83 - self._arn = response['FunctionARN'] 83 + self._arn = response['Configuration']['FunctionArn']
84 except Exception: 84 except Exception:
85 LOG.debug('Unable to find ARN for function: %s', self.name) 85 LOG.debug('Unable to find ARN for function: %s', self.name)
86 return self._arn 86 return self._arn
@@ -124,30 +124,68 @@ class Function(object):
124 else: 124 else:
125 self._zip_lambda_file(zipfile_name, lambda_fn) 125 self._zip_lambda_file(zipfile_name, lambda_fn)
126 126
127 - def upload(self): 127 + def add_permissions(self):
128 - LOG.debug('uploading %s', self.zipfile_name) 128 + for permission in self.permissions:
129 + try:
130 + kwargs = {
131 + 'FunctionName': self.name,
132 + 'StatementId': permission['statement_id'],
133 + 'Action': permission['action'],
134 + 'Principal': permission['principal']}
135 + source_arn = permission.get('source_arn', None)
136 + if source_arn:
137 + kwargs['SourceArn'] = source_arn
138 + source_account = permission.get('source_account', None)
139 + if source_account:
140 + kwargs['SourceAccount'] = source_account
141 + response = self._lambda_svc.add_permission(**kwargs)
142 + LOG.debug(response)
143 + except Exception:
144 + LOG.exception('Unable to add permission')
145 +
146 + def create(self):
147 + LOG.debug('creating %s', self.zipfile_name)
129 self.zip_lambda_function(self.zipfile_name, self.path) 148 self.zip_lambda_function(self.zipfile_name, self.path)
130 with open(self.zipfile_name, 'rb') as fp: 149 with open(self.zipfile_name, 'rb') as fp:
131 exec_role = self._context.exec_role_arn 150 exec_role = self._context.exec_role_arn
151 + LOG.debug('exec_role=%s', exec_role)
132 try: 152 try:
133 - response = self._lambda_svc.upload_function( 153 + zipdata = fp.read()
154 + response = self._lambda_svc.create_function(
134 FunctionName=self.name, 155 FunctionName=self.name,
135 - FunctionZip=fp, 156 + Code={'ZipFile': zipdata},
136 Runtime=self.runtime, 157 Runtime=self.runtime,
137 Role=exec_role, 158 Role=exec_role,
138 Handler=self.handler, 159 Handler=self.handler,
139 - Mode=self.mode,
140 Description=self.description, 160 Description=self.description,
141 Timeout=self.timeout, 161 Timeout=self.timeout,
142 MemorySize=self.memory_size) 162 MemorySize=self.memory_size)
143 LOG.debug(response) 163 LOG.debug(response)
144 except Exception: 164 except Exception:
145 LOG.exception('Unable to upload zip file') 165 LOG.exception('Unable to upload zip file')
166 + self.add_permissions()
167 +
168 + def update(self):
169 + LOG.debug('updating %s', self.zipfile_name)
170 + self.zip_lambda_function(self.zipfile_name, self.path)
171 + with open(self.zipfile_name, 'rb') as fp:
172 + try:
173 + zipdata = fp.read()
174 + response = self._lambda_svc.update_function_code(
175 + FunctionName=self.name,
176 + ZipFile=zipdata)
177 + LOG.debug(response)
178 + except Exception:
179 + LOG.exception('Unable to update zip file')
146 180
147 def delete(self): 181 def delete(self):
148 LOG.debug('deleting function %s', self.name) 182 LOG.debug('deleting function %s', self.name)
149 - response = self._lambda_svc.delete_function(FunctionName=self.name) 183 + response = None
150 - LOG.debug(response) 184 + try:
185 + response = self._lambda_svc.delete_function(FunctionName=self.name)
186 + LOG.debug(response)
187 + except ClientError:
188 + LOG.debug('function %s: not found', self.name)
151 return response 189 return response
152 190
153 def status(self): 191 def status(self):
@@ -169,5 +207,24 @@ class Function(object):
169 InvokeArgs=fp) 207 InvokeArgs=fp)
170 LOG.debug(response) 208 LOG.debug(response)
171 209
172 - def test(self): 210 + def _invoke(self, test_data, invocation_type):
173 - self.invoke_asynch(self.test_data) 211 + if test_data is None:
212 + test_data = self.test_data
213 + LOG.debug('invoke %s', test_data)
214 + with open(test_data) as fp:
215 + response = self._lambda_svc.invoke(
216 + FunctionName=self.name,
217 + InvocationType=invocation_type,
218 + LogType='Tail',
219 + Payload=fp.read())
220 + LOG.debug(response)
221 + return response
222 +
223 + def invoke(self, test_data=None):
224 + return self._invoke(test_data, 'RequestResponse')
225 +
226 + def invoke_async(self, test_data=None):
227 + return self._invoke(test_data, 'Event')
228 +
229 + def dryrun(self, test_data=None):
230 + return self._invoke(test_data, 'DryRun')
......
1 +# Copyright (c) 2015 Mitch Garnaat http://garnaat.org/
2 +#
3 +# Licensed under the Apache License, Version 2.0 (the "License"). You
4 +# may not use this file except in compliance with the License. A copy of
5 +# the License is located at
6 +#
7 +# http://aws.amazon.com/apache2.0/
8 +#
9 +# or in the "license" file accompanying this file. This file is
10 +# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
11 +# ANY KIND, either express or implied. See the License for the specific
12 +# language governing permissions and limitations under the License.
13 +
14 +import logging
15 +
16 +import kappa.aws
17 +
18 +LOG = logging.getLogger(__name__)
19 +
20 +
21 +class Policy(object):
22 +
23 + Path = '/kappa/'
24 +
25 + def __init__(self, context, config):
26 + self._context = context
27 + self._config = config
28 + aws = kappa.aws.get_aws(context)
29 + self._iam_svc = aws.create_client('iam')
30 + self._arn = None
31 +
32 + @property
33 + def name(self):
34 + return self._config['name']
35 +
36 + @property
37 + def description(self):
38 + return self._config.get('description', None)
39 +
40 + @property
41 + def document(self):
42 + return self._config['document']
43 +
44 + @property
45 + def arn(self):
46 + if self._arn is None:
47 + policy = self.exists()
48 + if policy:
49 + self._arn = policy.get('Arn', None)
50 + return self._arn
51 +
52 + def exists(self):
53 + try:
54 + response = self._iam_svc.list_policies(PathPrefix=self.Path)
55 + LOG.debug(response)
56 + for policy in response['Policies']:
57 + if policy['PolicyName'] == self.name:
58 + return policy
59 + except Exception:
60 + LOG.exception('Error listing policies')
61 + return None
62 +
63 + def create(self):
64 + LOG.debug('creating policy %s', self.name)
65 + policy = self.exists()
66 + if not policy:
67 + with open(self.document, 'rb') as fp:
68 + try:
69 + response = self._iam_svc.create_policy(
70 + Path=self.Path, PolicyName=self.name,
71 + PolicyDocument=fp.read(),
72 + Description=self.description)
73 + LOG.debug(response)
74 + except Exception:
75 + LOG.exception('Error creating Policy')
76 +
77 + def delete(self):
78 + response = None
79 + if self.arn:
80 + LOG.debug('deleting policy %s', self.name)
81 + response = self._iam_svc.delete_policy(PolicyArn=self.arn)
82 + LOG.debug(response)
83 + return response
84 +
85 + def status(self):
86 + LOG.debug('getting status for policy %s', self.name)
87 + return self.exists()
1 +# Copyright (c) 2015 Mitch Garnaat http://garnaat.org/
2 +#
3 +# Licensed under the Apache License, Version 2.0 (the "License"). You
4 +# may not use this file except in compliance with the License. A copy of
5 +# the License is located at
6 +#
7 +# http://aws.amazon.com/apache2.0/
8 +#
9 +# or in the "license" file accompanying this file. This file is
10 +# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
11 +# ANY KIND, either express or implied. See the License for the specific
12 +# language governing permissions and limitations under the License.
13 +
14 +import logging
15 +
16 +from botocore.exceptions import ClientError
17 +
18 +import kappa.aws
19 +
20 +LOG = logging.getLogger(__name__)
21 +
22 +
23 +AssumeRolePolicyDocument = """{
24 + "Version" : "2012-10-17",
25 + "Statement": [ {
26 + "Effect": "Allow",
27 + "Principal": {
28 + "Service": [ "lambda.amazonaws.com" ]
29 + },
30 + "Action": [ "sts:AssumeRole" ]
31 + } ]
32 +}"""
33 +
34 +
35 +class Role(object):
36 +
37 + Path = '/kappa/'
38 +
39 + def __init__(self, context, config):
40 + self._context = context
41 + self._config = config
42 + aws = kappa.aws.get_aws(context)
43 + self._iam_svc = aws.create_client('iam')
44 + self._arn = None
45 +
46 + @property
47 + def name(self):
48 + return self._config['name']
49 +
50 + @property
51 + def arn(self):
52 + if self._arn is None:
53 + try:
54 + response = self._iam_svc.get_role(
55 + RoleName=self.name)
56 + LOG.debug(response)
57 + self._arn = response['Role']['Arn']
58 + except Exception:
59 + LOG.debug('Unable to find ARN for role: %s', self.name)
60 + return self._arn
61 +
62 + def exists(self):
63 + try:
64 + response = self._iam_svc.list_roles(PathPrefix=self.Path)
65 + LOG.debug(response)
66 + for role in response['Roles']:
67 + if role['RoleName'] == self.name:
68 + return role
69 + except Exception:
70 + LOG.exception('Error listing roles')
71 + return None
72 +
73 + def create(self):
74 + LOG.debug('creating role %s', self.name)
75 + role = self.exists()
76 + if not role:
77 + try:
78 + response = self._iam_svc.create_role(
79 + Path=self.Path, RoleName=self.name,
80 + AssumeRolePolicyDocument=AssumeRolePolicyDocument)
81 + LOG.debug(response)
82 + if self._context.policy:
83 + LOG.debug('attaching policy %s', self._context.policy.arn)
84 + response = self._iam_svc.attach_role_policy(
85 + RoleName=self.name,
86 + PolicyArn=self._context.policy.arn)
87 + LOG.debug(response)
88 + except ClientError:
89 + LOG.exception('Error creating Role')
90 +
91 + def delete(self):
92 + response = None
93 + LOG.debug('deleting role %s', self.name)
94 + try:
95 + LOG.debug('First detach the policy from the role')
96 + policy_arn = self._context.policy.arn
97 + if policy_arn:
98 + response = self._iam_svc.detach_role_policy(
99 + RoleName=self.name, PolicyArn=policy_arn)
100 + LOG.debug(response)
101 + response = self._iam_svc.delete_role(RoleName=self.name)
102 + LOG.debug(response)
103 + except ClientError:
104 + LOG.exception('role %s not found', self.name)
105 + return response
106 +
107 + def status(self):
108 + LOG.debug('getting status for role %s', self.name)
109 + try:
110 + response = self._iam_svc.get_role(RoleName=self.name)
111 + LOG.debug(response)
112 + except ClientError:
113 + LOG.debug('role %s not found', self.name)
114 + response = None
115 + return response
1 -# Copyright (c) 2014 Mitch Garnaat http://garnaat.org/
2 -#
3 -# Licensed under the Apache License, Version 2.0 (the "License"). You
4 -# may not use this file except in compliance with the License. A copy of
5 -# the License is located at
6 -#
7 -# http://aws.amazon.com/apache2.0/
8 -#
9 -# or in the "license" file accompanying this file. This file is
10 -# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
11 -# ANY KIND, either express or implied. See the License for the specific
12 -# language governing permissions and limitations under the License.
13 -
14 -import logging
15 -import time
16 -
17 -import kappa.aws
18 -
19 -LOG = logging.getLogger(__name__)
20 -
21 -
22 -class Stack(object):
23 -
24 - completed_states = ('CREATE_COMPLETE', 'UPDATE_COMPLETE')
25 - failed_states = ('UPDATE_ROLLBACK_COMPLETE', 'ROLLBACK_COMPLETE')
26 -
27 - def __init__(self, context, config):
28 - self._context = context
29 - self._config = config
30 - aws = kappa.aws.get_aws(self._context)
31 - self._cfn = aws.create_client('cloudformation')
32 - self._iam = aws.create_client('iam')
33 -
34 - @property
35 - def name(self):
36 - return self._config['stack_name']
37 -
38 - @property
39 - def template_path(self):
40 - return self._config['template']
41 -
42 - @property
43 - def exec_role(self):
44 - return self._config['exec_role']
45 -
46 - @property
47 - def exec_role_arn(self):
48 - return self._get_role_arn(self.exec_role)
49 -
50 - @property
51 - def invoke_role(self):
52 - return self._config['invoke_role']
53 -
54 - @property
55 - def invoke_role_arn(self):
56 - return self._get_role_arn(self.invoke_role)
57 -
58 - def _get_role_arn(self, role_name):
59 - role_arn = None
60 - try:
61 - resources = self._cfn.list_stack_resources(
62 - StackName=self.name)
63 - LOG.debug(resources)
64 - except Exception:
65 - LOG.exception('Unable to find role ARN: %s', role_name)
66 - for resource in resources['StackResourceSummaries']:
67 - if resource['LogicalResourceId'] == role_name:
68 - role = self._iam.get_role(
69 - RoleName=resource['PhysicalResourceId'])
70 - LOG.debug(role)
71 - role_arn = role['Role']['Arn']
72 - LOG.debug('role_arn: %s', role_arn)
73 - return role_arn
74 -
75 - def exists(self):
76 - """
77 - Does Cloudformation Stack already exist?
78 - """
79 - try:
80 - response = self._cfn.describe_stacks(StackName=self.name)
81 - LOG.debug('Stack %s exists', self.name)
82 - except Exception:
83 - LOG.debug('Stack %s does not exist', self.name)
84 - response = None
85 - return response
86 -
87 - def wait(self):
88 - done = False
89 - while not done:
90 - time.sleep(1)
91 - response = self._cfn.describe_stacks(StackName=self.name)
92 - LOG.debug(response)
93 - status = response['Stacks'][0]['StackStatus']
94 - LOG.debug('Stack status is: %s', status)
95 - if status in self.completed_states:
96 - done = True
97 - if status in self.failed_states:
98 - msg = 'Could not create stack %s: %s' % (self.name, status)
99 - raise ValueError(msg)
100 -
101 - def _create(self):
102 - LOG.debug('create_stack: stack_name=%s', self.name)
103 - template_body = open(self.template_path).read()
104 - try:
105 - response = self._cfn.create_stack(
106 - StackName=self.name, TemplateBody=template_body,
107 - Capabilities=['CAPABILITY_IAM'])
108 - LOG.debug(response)
109 - except Exception:
110 - LOG.exception('Unable to create stack')
111 - self.wait()
112 -
113 - def _update(self):
114 - LOG.debug('create_stack: stack_name=%s', self.name)
115 - template_body = open(self.template_path).read()
116 - try:
117 - response = self._cfn.update_stack(
118 - StackName=self.name, TemplateBody=template_body,
119 - Capabilities=['CAPABILITY_IAM'])
120 - LOG.debug(response)
121 - except Exception as e:
122 - if 'ValidationError' in str(e):
123 - LOG.info('No Updates Required')
124 - else:
125 - LOG.exception('Unable to update stack')
126 - self.wait()
127 -
128 - def update(self):
129 - if self.exists():
130 - self._update()
131 - else:
132 - self._create()
133 -
134 - def status(self):
135 - return self.exists()
136 -
137 - def delete(self):
138 - LOG.debug('delete_stack: stack_name=%s', self.name)
139 - try:
140 - response = self._cfn.delete_stack(StackName=self.name)
141 - LOG.debug(response)
142 - except Exception:
143 - LOG.exception('Unable to delete stack: %s', self.name)
1 -botocore==0.94.0 1 +boto3==0.0.16
2 -click==3.3 2 +click==4.0
3 PyYAML>=3.11 3 PyYAML>=3.11
4 mock>=1.0.1 4 mock>=1.0.1
5 nose==1.3.1 5 nose==1.3.1
......
1 -console.log('Loading event'); 1 +console.log('Loading function');
2 +
2 exports.handler = function(event, context) { 3 exports.handler = function(event, context) {
3 - console.log(JSON.stringify(event, null, ' ')); 4 + console.log(JSON.stringify(event, null, 2));
4 - for(i = 0; i < event.Records.length; ++i) { 5 + event.Records.forEach(function(record) {
5 - encodedPayload = event.Records[i].kinesis.data; 6 + // Kinesis data is base64 encoded so decode here
6 - payload = new Buffer(encodedPayload, 'base64').toString('ascii'); 7 + payload = new Buffer(record.kinesis.data, 'base64').toString('ascii');
7 - console.log("Decoded payload: " + payload); 8 + console.log('Decoded payload:', payload);
8 - } 9 + });
9 - context.done(null, "Hello World"); // SUCCESS with message 10 + context.succeed();
10 }; 11 };
......
1 --- 1 ---
2 profile: personal 2 profile: personal
3 region: us-east-1 3 region: us-east-1
4 -cloudformation: 4 +iam:
5 - template: roles.cf 5 + role_name: KinesisSampleRole
6 - stack_name: TestKinesis 6 + role_policy: AWSLambdaKinesisExecutionRole
7 - exec_role: ExecRole
8 - invoke_role: InvokeRole
9 lambda: 7 lambda:
10 name: KinesisSample 8 name: KinesisSample
11 zipfile_name: KinesisSample.zip 9 zipfile_name: KinesisSample.zip
@@ -15,9 +13,10 @@ lambda:
15 runtime: nodejs 13 runtime: nodejs
16 memory_size: 128 14 memory_size: 128
17 timeout: 3 15 timeout: 3
18 - mode: event
19 event_sources: 16 event_sources:
20 - 17 -
21 arn: arn:aws:kinesis:us-east-1:084307701560:stream/lambdastream 18 arn: arn:aws:kinesis:us-east-1:084307701560:stream/lambdastream
19 + starting_position: TRIM_HORIZON
20 + batch_size: 100
22 test_data: input.json 21 test_data: input.json
23 22
\ No newline at end of file
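With the CloudFormation stack gone, the event_sources entry above is consumed by KinesisEventSource.add() from kappa/event_source.py earlier in this diff. With these sample values it amounts to roughly the following Lambda API call (a sketch only; lambda_svc stands for the Lambda client kappa creates):

    lambda_svc.create_event_source_mapping(
        FunctionName='KinesisSample',
        EventSourceArn='arn:aws:kinesis:us-east-1:084307701560:stream/lambdastream',
        BatchSize=100,
        StartingPosition='TRIM_HORIZON',
        Enabled=True)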
......
@@ -12,8 +12,8 @@
12 "invokeIdentityArn": "arn:aws:iam::059493405231:role/testLEBRole", 12 "invokeIdentityArn": "arn:aws:iam::059493405231:role/testLEBRole",
13 "eventVersion": "1.0", 13 "eventVersion": "1.0",
14 "eventName": "aws:kinesis:record", 14 "eventName": "aws:kinesis:record",
15 - "eventSourceARN": "arn:aws:kinesis:us-east-1:35667example:stream/examplestream", 15 + "eventSourceARN": "arn:aws:kinesis:us-west-2:35667example:stream/examplestream",
16 - "awsRegion": "us-east-1" 16 + "awsRegion": "us-west-2"
17 } 17 }
18 ] 18 ]
19 } 19 }
......
1 +{
2 + "Version": "2012-10-17",
3 + "Statement":[
4 + {
5 + "Sid":"Stmt1428510662000",
6 + "Effect":"Allow",
7 + "Action":["dynamodb:*"],
8 + "Resource":["arn:aws:dynamodb:us-east-1:084307701560:table/snslambda"]
9 + }
10 + ]
11 +}
1 +---
2 +profile: personal
3 +region: us-east-1
4 +resources: resources.json
5 +iam:
6 + policy:
7 + description: A policy used with the Kappa SNS->DynamoDB example
8 + name: LambdaSNSSamplePolicy
9 + document: LambdaSNSSamplePolicy.json
10 + role:
11 + name: SNSSampleRole
12 + policy: LambdaSNSSamplePolicy
13 +lambda:
14 + name: SNSSample
15 + zipfile_name: SNSSample.zip
16 + description: Testing SNS -> DynamoDB Lambda handler
17 + path: messageStore.js
18 + handler: messageStore.handler
19 + runtime: nodejs
20 + memory_size: 128
21 + timeout: 3
22 + permissions:
23 + -
24 + statement_id: sns_invoke
25 + action: lambda:invokeFunction
26 + principal: sns.amazonaws.com
27 + source_arn: arn:aws:sns:us-east-1:084307701560:lambda_topic
28 + event_sources:
29 + -
30 + arn: arn:aws:sns:us-east-1:084307701560:lambda_topic
31 + test_data: input.json
32 +
\ No newline at end of file
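The permissions block above is what Function.add_permissions() (earlier in this diff) passes to Lambda's AddPermission API. With these sample values the resulting call is roughly (a sketch only; lambda_svc stands for the Lambda client kappa creates):

    lambda_svc.add_permission(
        FunctionName='SNSSample',
        StatementId='sns_invoke',
        Action='lambda:invokeFunction',
        Principal='sns.amazonaws.com',
        SourceArn='arn:aws:sns:us-east-1:084307701560:lambda_topic')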
1 +{
2 + "TableName": "snslambda",
3 + "AttributeDefinitions": [
4 + {
5 + "AttributeName": "SnsTopicArn",
6 + "AttributeType": "S"
7 + },
8 + {
9 + "AttributeName": "SnsPublishTime",
10 + "AttributeType": "S"
11 + },
12 + {
13 + "AttributeName": "SnsMessageId",
14 + "AttributeType": "S"
15 + }
16 + ],
17 + "KeySchema": [
18 + {
19 + "AttributeName": "SnsTopicArn",
20 + "KeyType": "HASH"
21 + },
22 + {
23 + "AttributeName": "SnsPublishTime",
24 + "KeyType": "RANGE"
25 + }
26 + ],
27 + "GlobalSecondaryIndexes": [
28 + {
29 + "IndexName": "MesssageIndex",
30 + "KeySchema": [
31 + {
32 + "AttributeName": "SnsMessageId",
33 + "KeyType": "HASH"
34 + }
35 + ],
36 + "Projection": {
37 + "ProjectionType": "ALL"
38 + },
39 + "ProvisionedThroughput": {
40 + "ReadCapacityUnits": 5,
41 + "WriteCapacityUnits": 1
42 + }
43 + }
44 + ],
45 + "ProvisionedThroughput": {
46 + "ReadCapacityUnits": 5,
47 + "WriteCapacityUnits": 5
48 + }
49 +}
1 +console.log('Loading event');
2 +var aws = require('aws-sdk');
3 +var ddb = new aws.DynamoDB({params: {TableName: 'snslambda'}});
4 +
5 +exports.handler = function(event, context) {
6 + var SnsMessageId = event.Records[0].Sns.MessageId;
7 + var SnsPublishTime = event.Records[0].Sns.Timestamp;
8 + var SnsTopicArn = event.Records[0].Sns.TopicArn;
9 + var LambdaReceiveTime = new Date().toString();
10 + var itemParams = {Item: {SnsTopicArn: {S: SnsTopicArn},
11 + SnsPublishTime: {S: SnsPublishTime}, SnsMessageId: {S: SnsMessageId},
12 + LambdaReceiveTime: {S: LambdaReceiveTime} }};
13 + ddb.putItem(itemParams, function() {
14 + context.done(null,'');
15 + });
16 +};
1 +{
2 + "AWSTemplateFormatVersion" : "2010-09-09",
3 +
4 + "Description" : "Creates the DynamoDB Table needed for the example",
5 +
6 + "Resources" : {
7 + "snslambda" : {
8 + "Type" : "AWS::DynamoDB::Table",
9 + "Properties" : {
10 + "AttributeDefinitions": [
11 + {
12 + "AttributeName" : "SnsTopicArn",
13 + "AttributeType" : "S"
14 + },
15 + {
16 + "AttributeName" : "SnsPublishTime",
17 + "AttributeType" : "S"
18 + }
19 + ],
20 + "KeySchema": [
21 + { "AttributeName": "SnsTopicArn", "KeyType": "HASH" },
22 + { "AttributeName": "SnsPublishTime", "KeyType": "RANGE" }
23 + ],
24 + "ProvisionedThroughput" : {
25 + "ReadCapacityUnits" : 5,
26 + "WriteCapacityUnits" : 5
27 + }
28 + }
29 + }
30 + },
31 +
32 + "Outputs" : {
33 + "TableName" : {
34 + "Value" : {"Ref" : "snslambda"},
35 + "Description" : "Table name of the newly created DynamoDB table"
36 + }
37 + }
38 +}
@@ -5,8 +5,8 @@ from setuptools import setup, find_packages
5 import os 5 import os
6 6
7 requires = [ 7 requires = [
8 - 'botocore==0.94.0', 8 + 'boto3==0.0.16',
9 - 'click==3.3', 9 + 'click==4.0',
10 'PyYAML>=3.11' 10 'PyYAML>=3.11'
11 ] 11 ]
12 12
......
1 +{
2 + "Statement":[
3 + {"Condition":
4 + {"ArnLike":{"AWS:SourceArn":"arn:aws:sns:us-east-1:123456789012:lambda_topic"}},
5 + "Resource":"arn:aws:lambda:us-east-1:123456789023:function:messageStore",
6 + "Action":"lambda:invokeFunction",
7 + "Principal":{"Service":"sns.amazonaws.com"},
8 + "Sid":"sns invoke","Effect":"Allow"
9 + }],
10 + "Id":"default",
11 + "Version":"2012-10-17"
12 +}
1 +import inspect
2 +
1 import mock 3 import mock
2 4
3 import tests.unit.responses as responses 5 import tests.unit.responses as responses
@@ -6,40 +8,23 @@ import tests.unit.responses as responses
6 class MockAWS(object): 8 class MockAWS(object):
7 9
8 def __init__(self, profile=None, region=None): 10 def __init__(self, profile=None, region=None):
9 - pass 11 + self.response_map = {}
12 + for name, value in inspect.getmembers(responses):
13 + if name.startswith('__'):
14 + continue
15 + if '_' in name:
16 + service_name, request_name = name.split('_', 1)
17 + if service_name not in self.response_map:
18 + self.response_map[service_name] = {}
19 + self.response_map[service_name][request_name] = value
10 20
11 def create_client(self, client_name): 21 def create_client(self, client_name):
12 client = None 22 client = None
13 - if client_name == 'logs': 23 + if client_name in self.response_map:
14 - client = mock.Mock()
15 - choices = responses.logs_describe_log_groups
16 - client.describe_log_groups = mock.Mock(
17 - side_effect=choices)
18 - choices = responses.logs_describe_log_streams
19 - client.describe_log_streams = mock.Mock(
20 - side_effect=choices)
21 - choices = responses.logs_get_log_events
22 - client.get_log_events = mock.Mock(
23 - side_effect=choices)
24 - if client_name == 'cloudformation':
25 - client = mock.Mock()
26 - choices = responses.cfn_list_stack_resources
27 - client.list_stack_resources = mock.Mock(
28 - side_effect=choices)
29 - choices = responses.cfn_describe_stacks
30 - client.describe_stacks = mock.Mock(
31 - side_effect=choices)
32 - choices = responses.cfn_create_stack
33 - client.create_stack = mock.Mock(
34 - side_effect=choices)
35 - choices = responses.cfn_delete_stack
36 - client.delete_stack = mock.Mock(
37 - side_effect=choices)
38 - if client_name == 'iam':
39 client = mock.Mock() 24 client = mock.Mock()
40 - choices = responses.iam_get_role 25 + for request in self.response_map[client_name]:
41 - client.get_role = mock.Mock( 26 + response = self.response_map[client_name][request]
42 - side_effect=choices) 27 + setattr(client, request, mock.Mock(side_effect=response))
43 return client 28 return client
44 29
45 30
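The refactored MockAWS above derives all of its mocks from a naming convention: a variable in tests.unit.responses named '<service>_<operation>' becomes the side_effect for client.<operation> on the mocked '<service>' client. A short illustration using response names defined below:

    aws = MockAWS()
    iam = aws.create_client('iam')
    iam.list_policies()    # first call returns responses.iam_list_policies[0]
    iam.create_role()      # first call returns responses.iam_create_role[0]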
......
1 import datetime 1 import datetime
2 from dateutil.tz import tzutc 2 from dateutil.tz import tzutc
3 3
4 -cfn_list_stack_resources = [{'ResponseMetadata': {'HTTPStatusCode': 200, 'RequestId': 'dd35f0ef-9699-11e4-ba38-c355c9515dbc'}, u'StackResourceSummaries': [{u'ResourceStatus': 'CREATE_COMPLETE', u'ResourceType': 'AWS::IAM::Role', u'ResourceStatusReason': None, u'LastUpdatedTimestamp': datetime.datetime(2015, 1, 6, 17, 37, 54, 861000, tzinfo=tzutc()), u'PhysicalResourceId': 'TestKinesis-InvokeRole-IF6VUXY9MBJN', u'LogicalResourceId': 'InvokeRole'}, {u'ResourceStatus': 'CREATE_COMPLETE', u'ResourceType': 'AWS::IAM::Role', u'ResourceStatusReason': None, u'LastUpdatedTimestamp': datetime.datetime(2015, 1, 6, 17, 37, 55, 18000, tzinfo=tzutc()), u'PhysicalResourceId': 'TestKinesis-ExecRole-567SAV6TZOET', u'LogicalResourceId': 'ExecRole'}, {u'ResourceStatus': 'CREATE_COMPLETE', u'ResourceType': 'AWS::IAM::Policy', u'ResourceStatusReason': None, u'LastUpdatedTimestamp': datetime.datetime(2015, 1, 6, 17, 37, 58, 120000, tzinfo=tzutc()), u'PhysicalResourceId': 'TestK-Invo-OMW5SDLQM8FM', u'LogicalResourceId': 'InvokeRolePolicies'}, {u'ResourceStatus': 'CREATE_COMPLETE', u'ResourceType': 'AWS::IAM::Policy', u'ResourceStatusReason': None, u'LastUpdatedTimestamp': datetime.datetime(2015, 1, 6, 17, 37, 58, 454000, tzinfo=tzutc()), u'PhysicalResourceId': 'TestK-Exec-APWRVKTBPPPT', u'LogicalResourceId': 'ExecRolePolicies'}]}] 4 +iam_list_policies = [{u'IsTruncated': True,
5 + u'Marker': 'ABcyoYmSlphARcitCJruhVIxKW3Hg1LJD3Fm4LAW8iGKykrSNrApiUoz2rjIuNiLJpT6JtUgP5M7wTuPZcHu1KsvMarvgFBFQObTPSa4WF22Zg==',
6 + u'Policies': [{u'Arn': 'arn:aws:iam::123456789012:policy/FooPolicy',
7 + u'AttachmentCount': 0,
8 + u'CreateDate': datetime.datetime(2015, 2, 24, 3, 16, 24, tzinfo=tzutc()),
9 + u'DefaultVersionId': 'v2',
10 + u'IsAttachable': True,
11 + u'Path': '/',
12 + u'PolicyId': 'ANPAJHWE6R7YT7PLAH3KG',
13 + u'PolicyName': 'FooPolicy',
14 + u'UpdateDate': datetime.datetime(2015, 2, 25, 0, 19, 12, tzinfo=tzutc())},
15 + {u'Arn': 'arn:aws:iam::123456789012:policy/BarPolicy',
16 + u'AttachmentCount': 1,
17 + u'CreateDate': datetime.datetime(2015, 2, 25, 0, 11, 57, tzinfo=tzutc()),
18 + u'DefaultVersionId': 'v2',
19 + u'IsAttachable': True,
20 + u'Path': '/',
21 + u'PolicyId': 'ANPAJU7MVBQXOQTVQN3VM',
22 + u'PolicyName': 'BarPolicy',
23 + u'UpdateDate': datetime.datetime(2015, 2, 25, 0, 13, 8, tzinfo=tzutc())},
24 + {u'Arn': 'arn:aws:iam::123456789012:policy/FiePolicy',
25 + u'AttachmentCount': 1,
26 + u'CreateDate': datetime.datetime(2015, 3, 21, 19, 18, 21, tzinfo=tzutc()),
27 + u'DefaultVersionId': 'v4',
28 + u'IsAttachable': True,
29 + u'Path': '/',
30 + u'PolicyId': 'ANPAIXQ72B2OH2RZPYQ4Y',
31 + u'PolicyName': 'FiePolicy',
32 + u'UpdateDate': datetime.datetime(2015, 3, 26, 23, 26, 52, tzinfo=tzutc())}],
33 +'ResponseMetadata': {'HTTPStatusCode': 200,
34 + 'RequestId': '4e87c995-ecf2-11e4-bb10-51f1499b3162'}}]
35 +
36 +iam_create_policy = [{u'Policy': {u'PolicyName': 'LambdaChatDynamoDBPolicy', u'CreateDate': datetime.datetime(2015, 4, 27, 12, 13, 35, 240000, tzinfo=tzutc()), u'AttachmentCount': 0, u'IsAttachable': True, u'PolicyId': 'ANPAISQNU4EPZZDVZUOKU', u'DefaultVersionId': 'v1', u'Path': '/kappa/', u'Arn': 'arn:aws:iam::658794617753:policy/kappa/LambdaChatDynamoDBPolicy', u'UpdateDate': datetime.datetime(2015, 4, 27, 12, 13, 35, 240000, tzinfo=tzutc())}, 'ResponseMetadata': {'HTTPStatusCode': 200, 'RequestId': 'd403e95f-ecd6-11e4-9ee0-15e8b71db930'}}]
37 +
38 +iam_list_roles = [{'ResponseMetadata': {'HTTPStatusCode': 200, 'RequestId': 'd41415ff-ecd6-11e4-bb10-51f1499b3162'}, u'IsTruncated': False, u'Roles': [{u'AssumeRolePolicyDocument': {u'Version': u'2012-10-17', u'Statement': [{u'Action': u'sts:AssumeRole', u'Principal': {u'Service': u'lambda.amazonaws.com'}, u'Effect': u'Allow', u'Sid': u''}]}, u'RoleId': 'AROAJ4JSNL3M4UYI6GDYS', u'CreateDate': datetime.datetime(2015, 4, 27, 11, 59, 19, tzinfo=tzutc()), u'RoleName': 'FooRole', u'Path': '/kappa/', u'Arn': 'arn:aws:iam::123456789012:role/kappa/FooRole'}]}]
39 +
40 +iam_create_role = [{u'Role': {u'AssumeRolePolicyDocument': {u'Version': u'2012-10-17', u'Statement': [{u'Action': [u'sts:AssumeRole'], u'Effect': u'Allow', u'Principal': {u'Service': [u'lambda.amazonaws.com']}}]}, u'RoleId': 'AROAIT2ZRRPQBOIBBHPZU', u'CreateDate': datetime.datetime(2015, 4, 27, 12, 13, 35, 426000, tzinfo=tzutc()), u'RoleName': 'BazRole', u'Path': '/kappa/', u'Arn': 'arn:aws:iam::123456789012:role/kappa/BazRole'}, 'ResponseMetadata': {'HTTPStatusCode': 200, 'RequestId': 'd41fd55c-ecd6-11e4-9fd8-03ee0021e940'}}]
5 41
6 iam_get_role = [{u'Role': {u'AssumeRolePolicyDocument': {u'Version': u'2012-10-17', u'Statement': [{u'Action': u'sts:AssumeRole', u'Principal': {u'Service': u's3.amazonaws.com'}, u'Effect': u'Allow', u'Condition': {u'ArnLike': {u'sts:ExternalId': u'arn:aws:s3:::*'}}, u'Sid': u''}, {u'Action': u'sts:AssumeRole', u'Principal': {u'Service': u'lambda.amazonaws.com'}, u'Effect': u'Allow', u'Sid': u''}]}, u'RoleId': 'AROAIEVJHUJG2I4MG5PSC', u'CreateDate': datetime.datetime(2015, 1, 6, 17, 37, 44, tzinfo=tzutc()), u'RoleName': 'TestKinesis-InvokeRole-IF6VUXY9MBJN', u'Path': '/', u'Arn': 'arn:aws:iam::0123456789012:role/TestKinesis-InvokeRole-FOO'}, 'ResponseMetadata': {'HTTPStatusCode': 200, 'RequestId': 'dd6e8d42-9699-11e4-afe6-d3625e8b365b'}}] 42 iam_get_role = [{u'Role': {u'AssumeRolePolicyDocument': {u'Version': u'2012-10-17', u'Statement': [{u'Action': u'sts:AssumeRole', u'Principal': {u'Service': u's3.amazonaws.com'}, u'Effect': u'Allow', u'Condition': {u'ArnLike': {u'sts:ExternalId': u'arn:aws:s3:::*'}}, u'Sid': u''}, {u'Action': u'sts:AssumeRole', u'Principal': {u'Service': u'lambda.amazonaws.com'}, u'Effect': u'Allow', u'Sid': u''}]}, u'RoleId': 'AROAIEVJHUJG2I4MG5PSC', u'CreateDate': datetime.datetime(2015, 1, 6, 17, 37, 44, tzinfo=tzutc()), u'RoleName': 'TestKinesis-InvokeRole-IF6VUXY9MBJN', u'Path': '/', u'Arn': 'arn:aws:iam::0123456789012:role/TestKinesis-InvokeRole-FOO'}, 'ResponseMetadata': {'HTTPStatusCode': 200, 'RequestId': 'dd6e8d42-9699-11e4-afe6-d3625e8b365b'}}]
7 43
44 +iam_attach_role_policy = [{'ResponseMetadata': {'HTTPStatusCode': 200, 'RequestId': 'd43e32dc-ecd6-11e4-9fd8-03ee0021e940'}}]
45 +
46 +iam_detach_role_policy = [{'ResponseMetadata': {'HTTPStatusCode': 200, 'RequestId': 'a7d30b51-ecd6-11e4-bbe4-d996b8ad5d9e'}}]
47 +
48 +iam_delete_role = [{'ResponseMetadata': {'HTTPStatusCode': 200, 'RequestId': 'a7e5a97e-ecd6-11e4-ae9e-6dee7bf37e66'}}]
49 +
50 +lambda_create_function = [{u'FunctionName': u'LambdaChatDynamoDB', 'ResponseMetadata': {'HTTPStatusCode': 201, 'RequestId': 'd7840efb-ecd6-11e4-b8b0-f7f3177894e9'}, u'CodeSize': 22024, u'MemorySize': 128, u'FunctionArn': u'arn:aws:lambda:us-east-1:123456789012:function:FooBarFunction', u'Handler': u'FooBarFunction.handler', u'Role': u'arn:aws:iam::123456789012:role/kappa/BazRole', u'Timeout': 3, u'LastModified': u'2015-04-27T12:13:41.147+0000', u'Runtime': u'nodejs', u'Description': u'A FooBar function'}]
51 +
52 +lambda_delete_function = [{'ResponseMetadata': {'HTTPStatusCode': 204, 'RequestId': 'a499b2c2-ecd6-11e4-8d2a-77b7e55836e7'}}]
53 +
8 logs_describe_log_groups = [{'ResponseMetadata': {'HTTPStatusCode': 200, 54 logs_describe_log_groups = [{'ResponseMetadata': {'HTTPStatusCode': 200,
9 'RequestId': 'da962431-afed-11e4-8c17-1776597471e6'}, 55 'RequestId': 'da962431-afed-11e4-8c17-1776597471e6'},
10 u'logGroups': [{u'arn': u'arn:aws:logs:us-east-1:0123456789012:log-group:/aws/lambda/KinesisSample*', 56 u'logGroups': [{u'arn': u'arn:aws:logs:us-east-1:0123456789012:log-group:/aws/lambda/KinesisSample*',
...@@ -23,13 +69,3 @@ logs_describe_log_groups = [{'ResponseMetadata': {'HTTPStatusCode': 200, ...@@ -23,13 +69,3 @@ logs_describe_log_groups = [{'ResponseMetadata': {'HTTPStatusCode': 200,
23 logs_describe_log_streams = [{u'logStreams': [{u'firstEventTimestamp': 1417042749449, u'lastEventTimestamp': 1417042749547, u'creationTime': 1417042748263, u'uploadSequenceToken': u'49540114640150833041490484409222729829873988799393975922', u'logStreamName': u'1cc48e4e613246b7974094323259d600', u'lastIngestionTime': 1417042750483, u'arn': u'arn:aws:logs:us-east-1:0123456789012:log-group:/aws/lambda/KinesisSample:log-stream:1cc48e4e613246b7974094323259d600', u'storedBytes': 712}, {u'firstEventTimestamp': 1417272406988, u'lastEventTimestamp': 1417272407088, u'creationTime': 1417272405690, u'uploadSequenceToken': u'49540113907504451034164105858363493278561872472363261986', u'logStreamName': u'2782a5ff88824c85a9639480d1ed7bbe', u'lastIngestionTime': 1417272408043, u'arn': u'arn:aws:logs:us-east-1:0123456789012:log-group:/aws/lambda/KinesisSample:log-stream:2782a5ff88824c85a9639480d1ed7bbe', u'storedBytes': 712}, {u'firstEventTimestamp': 1420569035842, u'lastEventTimestamp': 1420569035941, u'creationTime': 1420569034614, u'uploadSequenceToken': u'49540113907883563702539166025438885323514410026454245426', u'logStreamName': u'2d62991a479b4ebf9486176122b72a55', u'lastIngestionTime': 1420569036909, u'arn': u'arn:aws:logs:us-east-1:0123456789012:log-group:/aws/lambda/KinesisSample:log-stream:2d62991a479b4ebf9486176122b72a55', u'storedBytes': 709}, {u'firstEventTimestamp': 1418244027421, u'lastEventTimestamp': 1418244027541, u'creationTime': 1418244026907, u'uploadSequenceToken': u'49540113964795065449189116778452984186276757901477438642', u'logStreamName': u'4f44ffa128d6405591ca83b2b0f9dd2d', u'lastIngestionTime': 1418244028484, u'arn': u'arn:aws:logs:us-east-1:0123456789012:log-group:/aws/lambda/KinesisSample:log-stream:4f44ffa128d6405591ca83b2b0f9dd2d', u'storedBytes': 1010}, {u'firstEventTimestamp': 1418242565524, u'lastEventTimestamp': 1418242565641, u'creationTime': 1418242564196, u'uploadSequenceToken': u'49540113095132904942090446312687285178819573422397343074', u'logStreamName': u'69c5ac87e7e6415985116e8cb44e538e', u'lastIngestionTime': 1418242566558, u'arn': u'arn:aws:logs:us-east-1:0123456789012:log-group:/aws/lambda/KinesisSample:log-stream:69c5ac87e7e6415985116e8cb44e538e', u'storedBytes': 713}, {u'firstEventTimestamp': 1417213193378, u'lastEventTimestamp': 1417213193478, u'creationTime': 1417213192095, u'uploadSequenceToken': u'49540113336360065754596187770479764234792559857643841394', u'logStreamName': u'f68e3d87b8a14cdba338f6926f7cf50a', u'lastIngestionTime': 1417213194421, u'arn': u'arn:aws:logs:us-east-1:0123456789012:log-group:/aws/lambda/KinesisSample:log-stream:f68e3d87b8a14cdba338f6926f7cf50a', u'storedBytes': 711}], 'ResponseMetadata': {'HTTPStatusCode': 200, 'RequestId': '2a6d4941-969b-11e4-947f-19d1c72ede7e'}}] 69 logs_describe_log_streams = [{u'logStreams': [{u'firstEventTimestamp': 1417042749449, u'lastEventTimestamp': 1417042749547, u'creationTime': 1417042748263, u'uploadSequenceToken': u'49540114640150833041490484409222729829873988799393975922', u'logStreamName': u'1cc48e4e613246b7974094323259d600', u'lastIngestionTime': 1417042750483, u'arn': u'arn:aws:logs:us-east-1:0123456789012:log-group:/aws/lambda/KinesisSample:log-stream:1cc48e4e613246b7974094323259d600', u'storedBytes': 712}, {u'firstEventTimestamp': 1417272406988, u'lastEventTimestamp': 1417272407088, u'creationTime': 1417272405690, u'uploadSequenceToken': u'49540113907504451034164105858363493278561872472363261986', u'logStreamName': u'2782a5ff88824c85a9639480d1ed7bbe', u'lastIngestionTime': 1417272408043, 
u'arn': u'arn:aws:logs:us-east-1:0123456789012:log-group:/aws/lambda/KinesisSample:log-stream:2782a5ff88824c85a9639480d1ed7bbe', u'storedBytes': 712}, {u'firstEventTimestamp': 1420569035842, u'lastEventTimestamp': 1420569035941, u'creationTime': 1420569034614, u'uploadSequenceToken': u'49540113907883563702539166025438885323514410026454245426', u'logStreamName': u'2d62991a479b4ebf9486176122b72a55', u'lastIngestionTime': 1420569036909, u'arn': u'arn:aws:logs:us-east-1:0123456789012:log-group:/aws/lambda/KinesisSample:log-stream:2d62991a479b4ebf9486176122b72a55', u'storedBytes': 709}, {u'firstEventTimestamp': 1418244027421, u'lastEventTimestamp': 1418244027541, u'creationTime': 1418244026907, u'uploadSequenceToken': u'49540113964795065449189116778452984186276757901477438642', u'logStreamName': u'4f44ffa128d6405591ca83b2b0f9dd2d', u'lastIngestionTime': 1418244028484, u'arn': u'arn:aws:logs:us-east-1:0123456789012:log-group:/aws/lambda/KinesisSample:log-stream:4f44ffa128d6405591ca83b2b0f9dd2d', u'storedBytes': 1010}, {u'firstEventTimestamp': 1418242565524, u'lastEventTimestamp': 1418242565641, u'creationTime': 1418242564196, u'uploadSequenceToken': u'49540113095132904942090446312687285178819573422397343074', u'logStreamName': u'69c5ac87e7e6415985116e8cb44e538e', u'lastIngestionTime': 1418242566558, u'arn': u'arn:aws:logs:us-east-1:0123456789012:log-group:/aws/lambda/KinesisSample:log-stream:69c5ac87e7e6415985116e8cb44e538e', u'storedBytes': 713}, {u'firstEventTimestamp': 1417213193378, u'lastEventTimestamp': 1417213193478, u'creationTime': 1417213192095, u'uploadSequenceToken': u'49540113336360065754596187770479764234792559857643841394', u'logStreamName': u'f68e3d87b8a14cdba338f6926f7cf50a', u'lastIngestionTime': 1417213194421, u'arn': u'arn:aws:logs:us-east-1:0123456789012:log-group:/aws/lambda/KinesisSample:log-stream:f68e3d87b8a14cdba338f6926f7cf50a', u'storedBytes': 711}], 'ResponseMetadata': {'HTTPStatusCode': 200, 'RequestId': '2a6d4941-969b-11e4-947f-19d1c72ede7e'}}]
24 70
25 logs_get_log_events = [{'ResponseMetadata': {'HTTPStatusCode': 200, 'RequestId': '2a7deb71-969b-11e4-914b-8f1f3d7b023d'}, u'nextForwardToken': u'f/31679748107442531967654742688057700554200447759088287749', u'events': [{u'ingestionTime': 1420569036909, u'timestamp': 1420569035842, u'message': u'2015-01-06T18:30:35.841Z\tko2sss03iq7l2pdk\tLoading event\n'}, {u'ingestionTime': 1420569036909, u'timestamp': 1420569035899, u'message': u'START RequestId: 23007242-95d2-11e4-a10e-7b2ab60a7770\n'}, {u'ingestionTime': 1420569036909, u'timestamp': 1420569035940, u'message': u'2015-01-06T18:30:35.940Z\t23007242-95d2-11e4-a10e-7b2ab60a7770\t{\n "Records": [\n {\n "kinesis": {\n "partitionKey": "partitionKey-3",\n "kinesisSchemaVersion": "1.0",\n "data": "SGVsbG8sIHRoaXMgaXMgYSB0ZXN0IDEyMy4=",\n "sequenceNumber": "49545115243490985018280067714973144582180062593244200961"\n },\n "eventSource": "aws:kinesis",\n "eventID": "shardId-000000000000:49545115243490985018280067714973144582180062593244200961",\n "invokeIdentityArn": "arn:aws:iam::0123456789012:role/testLEBRole",\n "eventVersion": "1.0",\n "eventName": "aws:kinesis:record",\n "eventSourceARN": "arn:aws:kinesis:us-east-1:35667example:stream/examplestream",\n "awsRegion": "us-east-1"\n }\n ]\n}\n'}, {u'ingestionTime': 1420569036909, u'timestamp': 1420569035940, u'message': u'2015-01-06T18:30:35.940Z\t23007242-95d2-11e4-a10e-7b2ab60a7770\tDecoded payload: Hello, this is a test 123.\n'}, {u'ingestionTime': 1420569036909, u'timestamp': 1420569035941, u'message': u'END RequestId: 23007242-95d2-11e4-a10e-7b2ab60a7770\n'}, {u'ingestionTime': 1420569036909, u'timestamp': 1420569035941, u'message': u'REPORT RequestId: 23007242-95d2-11e4-a10e-7b2ab60a7770\tDuration: 98.51 ms\tBilled Duration: 100 ms \tMemory Size: 128 MB\tMax Memory Used: 26 MB\t\n'}], u'nextBackwardToken': u'b/31679748105234758193000210997045664445208259969996226560'}] 71 logs_get_log_events = [{'ResponseMetadata': {'HTTPStatusCode': 200, 'RequestId': '2a7deb71-969b-11e4-914b-8f1f3d7b023d'}, u'nextForwardToken': u'f/31679748107442531967654742688057700554200447759088287749', u'events': [{u'ingestionTime': 1420569036909, u'timestamp': 1420569035842, u'message': u'2015-01-06T18:30:35.841Z\tko2sss03iq7l2pdk\tLoading event\n'}, {u'ingestionTime': 1420569036909, u'timestamp': 1420569035899, u'message': u'START RequestId: 23007242-95d2-11e4-a10e-7b2ab60a7770\n'}, {u'ingestionTime': 1420569036909, u'timestamp': 1420569035940, u'message': u'2015-01-06T18:30:35.940Z\t23007242-95d2-11e4-a10e-7b2ab60a7770\t{\n "Records": [\n {\n "kinesis": {\n "partitionKey": "partitionKey-3",\n "kinesisSchemaVersion": "1.0",\n "data": "SGVsbG8sIHRoaXMgaXMgYSB0ZXN0IDEyMy4=",\n "sequenceNumber": "49545115243490985018280067714973144582180062593244200961"\n },\n "eventSource": "aws:kinesis",\n "eventID": "shardId-000000000000:49545115243490985018280067714973144582180062593244200961",\n "invokeIdentityArn": "arn:aws:iam::0123456789012:role/testLEBRole",\n "eventVersion": "1.0",\n "eventName": "aws:kinesis:record",\n "eventSourceARN": "arn:aws:kinesis:us-east-1:35667example:stream/examplestream",\n "awsRegion": "us-east-1"\n }\n ]\n}\n'}, {u'ingestionTime': 1420569036909, u'timestamp': 1420569035940, u'message': u'2015-01-06T18:30:35.940Z\t23007242-95d2-11e4-a10e-7b2ab60a7770\tDecoded payload: Hello, this is a test 123.\n'}, {u'ingestionTime': 1420569036909, u'timestamp': 1420569035941, u'message': u'END RequestId: 23007242-95d2-11e4-a10e-7b2ab60a7770\n'}, {u'ingestionTime': 1420569036909, u'timestamp': 1420569035941, 
u'message': u'REPORT RequestId: 23007242-95d2-11e4-a10e-7b2ab60a7770\tDuration: 98.51 ms\tBilled Duration: 100 ms \tMemory Size: 128 MB\tMax Memory Used: 26 MB\t\n'}], u'nextBackwardToken': u'b/31679748105234758193000210997045664445208259969996226560'}]
26 -
27 -cfn_describe_stacks = [
28 - {u'Stacks': [{u'StackId': 'arn:aws:cloudformation:us-east-1:084307701560:stack/TestKinesis/7c4ae730-96b8-11e4-94cc-5001dc3ed8d2', u'Description': None, u'Tags': [], u'StackStatusReason': 'User Initiated', u'CreationTime': datetime.datetime(2015, 1, 7, 21, 59, 43, 208000, tzinfo=tzutc()), u'Capabilities': ['CAPABILITY_IAM'], u'StackName': 'TestKinesis', u'NotificationARNs': [], u'StackStatus': 'CREATE_IN_PROGRESS', u'DisableRollback': False}], 'ResponseMetadata': {'HTTPStatusCode': 200, 'RequestId': '7d66debd-96b8-11e4-a647-4f4741ffff69'}},
29 - {u'Stacks': [{u'StackId': 'arn:aws:cloudformation:us-east-1:084307701560:stack/TestKinesis/7c4ae730-96b8-11e4-94cc-5001dc3ed8d2', u'Description': None, u'Tags': [], u'StackStatusReason': 'User Initiated', u'CreationTime': datetime.datetime(2015, 1, 7, 21, 59, 43, 208000, tzinfo=tzutc()), u'Capabilities': ['CAPABILITY_IAM'], u'StackName': 'TestKinesis', u'NotificationARNs': [], u'StackStatus': 'CREATE_IN_PROGRESS', u'DisableRollback': False}], 'ResponseMetadata': {'HTTPStatusCode': 200, 'RequestId': '7e36fff7-96b8-11e4-af44-6350f4f8c2ae'}},
30 - {u'Stacks': [{u'StackId': 'arn:aws:cloudformation:us-east-1:084307701560:stack/TestKinesis/7c4ae730-96b8-11e4-94cc-5001dc3ed8d2', u'Description': None, u'Tags': [], u'StackStatusReason': 'User Initiated', u'CreationTime': datetime.datetime(2015, 1, 7, 21, 59, 43, 208000, tzinfo=tzutc()), u'Capabilities': ['CAPABILITY_IAM'], u'StackName': 'TestKinesis', u'NotificationARNs': [], u'StackStatus': 'CREATE_IN_PROGRESS', u'DisableRollback': False}], 'ResponseMetadata': {'HTTPStatusCode': 200, 'RequestId': '7ef03e10-96b8-11e4-bc86-7f67e11abcfa'}},
31 - {u'Stacks': [{u'StackId': 'arn:aws:cloudformation:us-east-1:084307701560:stack/TestKinesis/7c4ae730-96b8-11e4-94cc-5001dc3ed8d2', u'Description': None, u'Tags': [], u'StackStatusReason': None, u'CreationTime': datetime.datetime(2015, 1, 7, 21, 59, 43, 208000, tzinfo=tzutc()), u'Capabilities': ['CAPABILITY_IAM'], u'StackName': 'TestKinesis', u'NotificationARNs': [], u'StackStatus': 'CREATE_COMPLETE', u'DisableRollback': False}], 'ResponseMetadata': {'HTTPStatusCode': 200, 'RequestId': '8c2bff8e-96b8-11e4-be70-c5ad82c32f2d'}}]
32 -
33 -cfn_create_stack = [{u'StackId': 'arn:aws:cloudformation:us-east-1:084307701560:stack/TestKinesis/7c4ae730-96b8-11e4-94cc-5001dc3ed8d2', 'ResponseMetadata': {'HTTPStatusCode': 200, 'RequestId': '7c2f2260-96b8-11e4-be70-c5ad82c32f2d'}}]
34 -
35 -cfn_delete_stack = [{'ResponseMetadata': {'HTTPStatusCode': 200, 'RequestId': 'f19af5b8-96bc-11e4-860e-11ba752b58a9'}}]
......
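Note on the fixtures above: each mock response is stored as a list so that a stubbed client can hand back one canned response per API call, in order. The real plumbing lives in tests/unit/mock_aws.py, which this diff does not include, so the class and method names below are purely illustrative; this is only a minimal sketch of how list-of-dict fixtures such as iam_create_role or lambda_delete_function can be consumed:

    # Illustrative sketch only -- the real mock client is built by
    # tests/unit/mock_aws.get_aws, which is not part of this diff.
    # Each fixture is a list of canned response dicts; popping from the
    # front simulates successive round-trips to the AWS service.

    class MockService(object):

        def __init__(self, canned_responses):
            # e.g. {'create_role': iam_create_role, 'delete_role': iam_delete_role}
            self._responses = {name: list(data)
                               for name, data in canned_responses.items()}

        def call(self, op_name, **kwargs):
            # Return the next canned response for this operation; the
            # request parameters are accepted but ignored by the stub.
            return self._responses[op_name].pop(0)

    iam = MockService({'delete_role': [
        {'ResponseMetadata': {'HTTPStatusCode': 200, 'RequestId': 'abc'}}]})
    assert iam.call('delete_role', RoleName='FooRole')[
        'ResponseMetadata']['HTTPStatusCode'] == 200
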
1 +# Copyright (c) 2015 Mitch Garnaat http://garnaat.org/
2 +#
3 +# Licensed under the Apache License, Version 2.0 (the "License"). You
4 +# may not use this file except in compliance with the License. A copy of
5 +# the License is located at
6 +#
7 +# http://aws.amazon.com/apache2.0/
8 +#
9 +# or in the "license" file accompanying this file. This file is
10 +# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
11 +# ANY KIND, either express or implied. See the License for the specific
12 +# language governing permissions and limitations under the License.
13 +
14 +import unittest
15 +import os
16 +
17 +import mock
18 +
19 +from kappa.policy import Policy
20 +from tests.unit.mock_aws import get_aws
21 +
22 +Config1 = {
23 + 'name': 'FooPolicy',
24 + 'description': 'This is the Foo policy',
25 + 'document': 'FooPolicy.json'}
26 +
27 +Config2 = {
28 + 'name': 'BazPolicy',
29 + 'description': 'This is the Baz policy',
30 + 'document': 'BazPolicy.json'}
31 +
32 +
33 +def path(filename):
34 + return os.path.join(os.path.dirname(__file__), 'data', filename)
35 +
36 +
37 +class TestPolicy(unittest.TestCase):
38 +
39 + def setUp(self):
40 + self.aws_patch = mock.patch('kappa.aws.get_aws', get_aws)
41 + self.mock_aws = self.aws_patch.start()
42 + Config1['document'] = path(Config1['document'])
43 + Config2['document'] = path(Config2['document'])
44 +
45 + def tearDown(self):
46 + self.aws_patch.stop()
47 +
48 + def test_properties(self):
49 + mock_context = mock.Mock()
50 + policy = Policy(mock_context, Config1)
51 + self.assertEqual(policy.name, Config1['name'])
52 + self.assertEqual(policy.document, Config1['document'])
53 + self.assertEqual(policy.description, Config1['description'])
54 +
55 + def test_exists(self):
56 + mock_context = mock.Mock()
57 + policy = Policy(mock_context, Config1)
58 + self.assertTrue(policy.exists())
59 +
60 + def test_not_exists(self):
61 + mock_context = mock.Mock()
62 + policy = Policy(mock_context, Config2)
63 + self.assertFalse(policy.exists())
64 +
65 + def test_create(self):
66 + mock_context = mock.Mock()
67 + policy = Policy(mock_context, Config2)
68 + policy.create()
69 +
70 + def test_delete(self):
71 + mock_context = mock.Mock()
72 + policy = Policy(mock_context, Config1)
73 + policy.delete()
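The new TestPolicy case reads its policy documents from FooPolicy.json and BazPolicy.json under tests/unit/data/, which are not part of this diff. As a rough illustration only (the real fixture contents are unknown here), a minimal IAM managed policy document of the kind kappa attaches to the execution role could be generated like this:

    # Hypothetical fixture generator -- FooPolicy.json / BazPolicy.json are
    # referenced by the tests but their contents are not shown in this diff.
    import json

    minimal_policy = {
        'Version': '2012-10-17',
        'Statement': [{
            'Effect': 'Allow',
            'Action': ['logs:CreateLogGroup',
                       'logs:CreateLogStream',
                       'logs:PutLogEvents'],
            'Resource': 'arn:aws:logs:*:*:*',
        }],
    }

    # Written to the working directory here; the tests expect the file
    # under tests/unit/data/.
    with open('FooPolicy.json', 'w') as fp:
        json.dump(minimal_policy, fp, indent=2)
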
...@@ -12,56 +12,47 @@ ...@@ -12,56 +12,47 @@
12 # language governing permissions and limitations under the License. 12 # language governing permissions and limitations under the License.
13 13
14 import unittest 14 import unittest
15 -import os
16 15
17 import mock 16 import mock
18 17
19 -from kappa.stack import Stack 18 +from kappa.role import Role
20 from tests.unit.mock_aws import get_aws 19 from tests.unit.mock_aws import get_aws
21 20
22 -Config = { 21 +Config1 = {'name': 'FooRole'}
23 - 'template': 'roles.cf',
24 - 'stack_name': 'FooBar',
25 - 'exec_role': 'ExecRole',
26 - 'invoke_role': 'InvokeRole'}
27 22
23 +Config2 = {'name': 'BazRole'}
28 24
29 -def path(filename):
30 - return os.path.join(os.path.dirname(__file__), 'data', filename)
31 25
32 - 26 +class TestRole(unittest.TestCase):
33 -class TestStack(unittest.TestCase):
34 27
35 def setUp(self): 28 def setUp(self):
36 self.aws_patch = mock.patch('kappa.aws.get_aws', get_aws) 29 self.aws_patch = mock.patch('kappa.aws.get_aws', get_aws)
37 self.mock_aws = self.aws_patch.start() 30 self.mock_aws = self.aws_patch.start()
38 - Config['template'] = path(Config['template'])
39 31
40 def tearDown(self): 32 def tearDown(self):
41 self.aws_patch.stop() 33 self.aws_patch.stop()
42 34
43 def test_properties(self): 35 def test_properties(self):
44 mock_context = mock.Mock() 36 mock_context = mock.Mock()
45 - stack = Stack(mock_context, Config) 37 + role = Role(mock_context, Config1)
46 - self.assertEqual(stack.name, Config['stack_name']) 38 + self.assertEqual(role.name, Config1['name'])
47 - self.assertEqual(stack.template_path, Config['template'])
48 - self.assertEqual(stack.exec_role, Config['exec_role'])
49 - self.assertEqual(stack.invoke_role, Config['invoke_role'])
50 - self.assertEqual(
51 - stack.invoke_role_arn,
52 - 'arn:aws:iam::0123456789012:role/TestKinesis-InvokeRole-FOO')
53 39
54 def test_exists(self): 40 def test_exists(self):
55 mock_context = mock.Mock() 41 mock_context = mock.Mock()
56 - stack = Stack(mock_context, Config) 42 + role = Role(mock_context, Config1)
57 - self.assertTrue(stack.exists()) 43 + self.assertTrue(role.exists())
44 +
45 + def test_not_exists(self):
46 + mock_context = mock.Mock()
47 + role = Role(mock_context, Config2)
48 + self.assertFalse(role.exists())
58 49
59 - def test_update(self): 50 + def test_create(self):
60 mock_context = mock.Mock() 51 mock_context = mock.Mock()
61 - stack = Stack(mock_context, Config) 52 + role = Role(mock_context, Config2)
62 - stack.update() 53 + role.create()
63 54
64 def test_delete(self): 55 def test_delete(self):
65 mock_context = mock.Mock() 56 mock_context = mock.Mock()
66 - stack = Stack(mock_context, Config) 57 + role = Role(mock_context, Config1)
67 - stack.delete() 58 + role.delete()
......
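Assuming the two test modules live at tests/unit/test_policy.py and tests/unit/test_role.py (the file names are implied but not shown by the diff separators), the new cases can be run directly with the standard library runner; nose or ``python -m unittest discover tests/unit`` would pick them up as well:

    # Minimal runner sketch; the module paths are assumptions based on the
    # imports above, not something this diff states explicitly.
    import unittest

    from tests.unit.test_policy import TestPolicy
    from tests.unit.test_role import TestRole

    if __name__ == '__main__':
        loader = unittest.defaultTestLoader
        suite = unittest.TestSuite([
            loader.loadTestsFromTestCase(TestPolicy),
            loader.loadTestsFromTestCase(TestRole),
        ])
        unittest.TextTestRunner(verbosity=2).run(suite)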