New Acceptance Testing tool

Davin Hanlon
 

Hi all!

We'd like to let you know about some new capabilities we're currently working on to improve acceptance testing of Puppet modules. Acceptance testing Puppet modules is tricky and often relies on tools such as Beaker <https://github.com/puppetlabs/beaker>. Some scenarios are difficult to test for in modules, such as running against multiple target OSes, or running just a single acceptance test from the suite.

We're working on a new tool that aims to address these shortcomings. It helps with acceptance testing by setting up the test environment, and it provides four main capabilities: provisioning a machine, installing the Puppet agent, installing a module, and running acceptance tests. These steps are broken down into a series of rake tasks. Under the hood, the tool makes heavy use of Bolt <https://puppet.com/products/puppet-bolt>. The tool supports parallel acceptance test runs across multiple machines, as well as running acceptance tests against localhost. It uses rake tasks rather than a single complex CLI command driven by a configuration file.
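
Purely as an illustration - these task and parameter names are placeholders, not the tool's final interface - the workflow might look something like this, with each step as its own rake task:

    # Hypothetical task names -- not the tool's actual API
    bundle exec rake 'acceptance:provision[docker, centos-7]'  # stand up a target machine
    bundle exec rake 'acceptance:install_agent'                # install the Puppet agent on it
    bundle exec rake 'acceptance:install_module'               # install the module under test
    bundle exec rake 'acceptance:run_tests'                    # run the acceptance suite

Because each step is a separate task, you can, for example, re-run only the test step against an already-provisioned machine instead of rebuilding everything for every run.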

We'd love to get some feedback on the tool before we start using it with our modules and open up the repo to the public. If anyone is interested in learning more about the work and providing feedback, please reply to this email. I'll set up a meeting where we'll walk through the tool, request feedback, and provide access to the repo for those who would like to contribute.

Please reply to this email if you're interested in helping us and I'll include you on the meeting invite!

Thanks for your time.

The Puppet Modules team

Ewoud Kohl van Wijngaarden <ewoud@...>
 

That does sound very interesting.

My main issue with Beaker is that it's heavily under-documented. Essentially I always grep through the source to find out how it works.

I am very interested, but not sure if I can commit to a meeting. A meeting is not a great way to spread information; blogs get indexed by search engines and you can easily refer others to them. Please consider writing a blog post introducing the tool. (Shout out to Henrik's https://puppet-on-the-edge.blogspot.com/ for a great example.)

Davin Hanlon
 

Hi Ewoud - I agree that meetings are poor at spreading information. Our long-term plan with this tool is to open the repo and provide good documentation, including blogs and other resources to help people use it and contribute. The initial meeting and request for feedback is to get an early sense check from experienced folks: is the approach we're taking sound, what functional gaps exist, and what issues are we likely to find? I'll include you on the invite (next week, in the afternoon EMEA time), but I appreciate that you may not attend. Also, if you provide your GitHub ID I'll add you to the repo in any case.

Let me know if you've any questions or comments!

Thanks,
Davin


Martin Alfke
 

Hi Davin,

I am also interested in learning about acceptance testing using Bolt.
Next week in the afternoon, EMEA time, sounds great for me.

Best,
Martin

David Hollinger
 

Hi Davin,

I'd also be interested in learning about acceptance testing with Bolt. I'm in the Central time zone, but my time will be very limited during the day next week due to a company event. Realistically, the only days I'm available are Monday or Tuesday.

Tim Meusel
 

Hi everybody,

On 10.01.19 15:57, Martin Alfke wrote:
> Hi Davin,
>
> I am also interested in learning about acceptance testing using Bolt.
> Next week in the afternoon, EMEA time, sounds great for me.

Like Martin, I'm interested, and I'm in the same timezone.

Cheers, Tim

Rob Nelson
 

If it's not too late, please add me to the invite list. I'm in the Eastern time zone but should be able to find a way to make it work.

Rob Nelson
rnelson0@...


Bram Vogelaar
 

I would be very interested too. Beaker has been very frustrating to use; kitchen-ci is much better, but it is single-node only by design.


Nick Miller
 

I think you've gotten the same feeling writing plans as I have, so I'm very interested in what ideas you have for a Beaker replacement. Send an invitation my way, please!


Ewoud Kohl van Wijngaarden
 

My quick impression of the presentation/tool is that the Puppet team really understood the shortcomings of Beaker and what you want to do as a module developer. It's clear to me that they're actually module developers who use this on a daily basis.

The tool itself has some clear limitations that show it's still in early development. The lack of a Vagrant backend was one that multiple people noticed, but the design already supports multiple backends, so adding another should be fairly straightforward.
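
To sketch why I think adding one would be straightforward: a provisioner backend typically only needs a couple of operations, something like the Ruby below. The class and method names here are my own invention, not taken from the tool.

    # Hypothetical sketch of a pluggable provisioner backend -- illustrative only
    class VagrantProvisioner
      # Bring up a machine for the given platform (e.g. 'centos-7') and
      # return the connection details the test runner needs.
      def provision(platform)
        system('vagrant', 'up')
        { host: 'localhost', port: 2222, user: 'vagrant' }
      end

      # Destroy the machine once the test run is finished.
      def tear_down(_node)
        system('vagrant', 'destroy', '-f')
      end
    end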

The separation of provisioning and test running was something I really liked. It means you can provision once, then test, change the module, and re-test until it's finished. It also means the spec helper can be less magical. And it uses the same .fixtures.yml we already use for other rspec testing, so it nicely unifies the ecosystem.
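
For anyone who hasn't used it: .fixtures.yml is the file puppetlabs_spec_helper already reads to install a module's dependencies for unit tests. A typical example (the module entries here are just illustrative):

    fixtures:
      repositories:
        stdlib: https://github.com/puppetlabs/puppetlabs-stdlib.git
      forge_modules:
        concat: puppetlabs/concat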

Another neat feature was running tests against the supported operating systems listed in metadata.json. This will need some more controls, because most environments can't handle running against all of them in parallel, but I'm confident that could be achieved easily.
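
For context, every module's metadata.json already declares its supported platforms in a section like this (illustrative values, not from any particular module):

    "operatingsystem_support": [
      { "operatingsystem": "CentOS", "operatingsystemrelease": ["6", "7"] },
      { "operatingsystem": "Ubuntu", "operatingsystemrelease": ["16.04", "18.04"] }
    ]

so the tool can derive its whole test matrix from metadata that already exists.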

Overall it looks like the tool we module developers want; it will make our lives easier and lead to higher-quality modules.

What I forgot to ask: will any of the people who worked on this be at cfgmgmtcamp? I'd like to chat some more about it and play with it. It also means I'll need to modify my Beaker talk there a bit :)

Davin Hanlon
 

Thanks for the note, Ewoud, and thanks to everyone who was able to join.

Everyone on the invite should have access to the repo now: https://github.com/puppetlabs/solid-waffle. If not, drop me a note with your GitHub ID and I'll add you. We're going through an internal process to rename the tool; once that's done we'll update the name everywhere and then make the repo public.

Unfortunately, neither TP nor Helen is at cfgmgmtcamp this year (David Schmitt will be there but, while he sits next to both TP and Helen, he hasn't been involved in this effort to date).

For now, the best place to raise queries, concerns, or enhancement requests is GitHub issues; we'll do our best to get back to you ASAP. Hopefully that works for everyone.

Let me know if you've any questions.

Thanks,
Davin

Jo Rhett
 

> Everyone on the invite should have access to the repo now: https://github.com/puppetlabs/solid-waffle.

Not me; I get a 404 for that. Sorry I couldn't make the meeting, but 7am is a crazy time in my household ;-)

--
Jo Rhett