
Re: Deprecate voxpupuli/nscd in favor of ghoneycutt/nscd

Steve Traylen
 

The VP module is probably the more modern one at the moment. It's mostly missing support for other OSes, and it doesn't expose every variable. Adding those things is probably a similar amount of work to modernising the GC module.
I have been planning to add the missing stuff to the VP one, but was stuck behind an MR that has been open for ages.

I say we just merge the functionality of the two modules. Nothing is wrong with either module.

On Tue, 11 Jun 2019, 19:10 James Powis, <powisj@...> wrote:
My largest concern comes from deprecating a community-supported module in favor of one supported by a single individual. Humans die easily; groups die less easily.

Also, Vox really needs some guidelines for how a module leaves Vox and/or is deprecated in favor of something else.

Thanks,

James R. Powis


On Mon, Jun 10, 2019 at 1:05 PM Garrett Honeycutt <gh@...> wrote:
Hello,

Would like to propose deprecating voxpupuli/nscd[1] in favor of
ghoneycutt/nscd[2].

My version has all the functionality present in VP's version and is
feature complete to the documentation on all supported platforms. VP's
version supports EL 6 and 7, while mine supports EL 5-7, Amazon Linux,
Debian 6, Solaris 10, SUSE 10-12 and 15, openSUSE 13, and Ubuntu 12.
They are both under the Apache v2 license.

What my module is missing is strong data types, puppet-strings
documentation (though every parameter is already documented), hiera
data in the module, and acceptance tests. It does have comprehensive
spec tests and is widely used in production.

[1] - https://github.com/voxpupuli/puppet-nscd
[2] - https://github.com/ghoneycutt/puppet-module-nscd

Best regards,
-g

--
Garrett Honeycutt
@learnpuppet
Puppet Training with LearnPuppet.com
Mobile: +1.206.414.8658




Re: Deprecate voxpupuli/nscd in favor of ghoneycutt/nscd

James Powis
 

My largest concern comes from deprecating a community-supported module in favor of one supported by a single individual. Humans die easily; groups die less easily.

Also, Vox really needs some guidelines for how a module leaves Vox and/or is deprecated in favor of something else.

Thanks,

James R. Powis


On Mon, Jun 10, 2019 at 1:05 PM Garrett Honeycutt <gh@...> wrote:
Hello,

Would like to propose deprecating voxpupuli/nscd[1] in favor of
ghoneycutt/nscd[2].

My version has all the functionality present in VP's version and is
feature complete to the documentation on all supported platforms. VP's
version supports EL 6 and 7, while mine supports EL 5-7, Amazon Linux,
Debian 6, Solaris 10, SUSE 10-12 and 15, openSUSE 13, and Ubuntu 12.
They are both under the Apache v2 license.

What my module is missing is strong data types, puppet-strings
documentation (though every parameter is already documented), hiera
data in the module, and acceptance tests. It does have comprehensive
spec tests and is widely used in production.

[1] - https://github.com/voxpupuli/puppet-nscd
[2] - https://github.com/ghoneycutt/puppet-module-nscd

Best regards,
-g

--
Garrett Honeycutt
@learnpuppet
Puppet Training with LearnPuppet.com
Mobile: +1.206.414.8658




Deprecate voxpupuli/nscd in favor of ghoneycutt/nscd

Garrett Honeycutt
 

Hello,

Would like to propose deprecating voxpupuli/nscd[1] in favor of
ghoneycutt/nscd[2].

My version has all the functionality present in VP's version and is
feature complete to the documentation on all supported platforms. VP's
version supports EL 6 and 7, while mine supports EL 5-7, Amazon Linux,
Debian 6, Solaris 10, SUSE 10-12 and 15, openSUSE 13, and Ubuntu 12.
They are both under the Apache v2 license.

What my module is missing is strong data types, puppet-strings
documentation (though every parameter is already documented), hiera
data in the module, and acceptance tests. It does have comprehensive
spec tests and is widely used in production.

[1] - https://github.com/voxpupuli/puppet-nscd
[2] - https://github.com/ghoneycutt/puppet-module-nscd

Best regards,
-g

--
Garrett Honeycutt
@learnpuppet
Puppet Training with LearnPuppet.com
Mobile: +1.206.414.8658


New Forge API Endpoints

Nik Anderson
 

Hello,
 
The Forge team is getting ready to announce new API endpoints in a blog post (draft below), and we wanted to give you all a heads-up beforehand. In short, we've added public endpoints for module management and authentication through API keys.
 
We're planning to add support to the puppet_forge gem for the new endpoints in the future. Does it seem like a possibility for the Blacksmith gem to be updated to use the new API key auth? Anything we can do to help with that?
 
Thanks!
 

Nik Anderson

Software Engineer

Pronouns: he / his

nik.anderson@...

Puppet. The shortest path to better software.
 
 
DRAFT: New Forge API Endpoints for Automating Module Management
 
Fully automating the lifecycle of your Puppet module - it's the stuff dreams are made of. Ok, maybe not so much for the general public, but if you're a module developer, this is an exciting prospect! That's why we're pleased to announce a new collection of Forge API endpoints that allow for complete programmatic module management. What does that mean? It's now possible to log in to your Forge account, create an API key, and use that key to publish, delete, or deprecate any of your modules directly through the Forge API.

## Current publishing options

Taking a step or two back, we know there are a lot of module developers out there using [Blacksmith](https://github.com/voxpupuli/puppet-blacksmith) to publish their modules, so you may be wondering how the new endpoints are an improvement. Blacksmith is a great tool that the Puppet community built in part to fill the gaps in the Forge API. However, some of the prior limitations in the API meant that the publishing flow of the Blacksmith implementation was somewhat less than ideal. For example, since we hadn't yet implemented authentication through API keys, the Blacksmith workflow involves passing a Forge username and password in plain text. This made automated publishing possible, but we're excited to now be able to provide a more standard authentication flow.
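To make the contrast concrete, here's a quick, hedged sketch. The `~/.puppetforge.yml` file name and keys reflect our reading of Blacksmith's README and are shown purely for illustration; the GET request below is just an example of what a request with the new header looks like:

~~~
# Old flow: Blacksmith reads plain-text Forge credentials from a file,
# e.g. ~/.puppetforge.yml (file name and keys per our reading of
# Blacksmith's docs; illustrative only):
#
#   url: https://forgeapi.puppet.com
#   username: nkanderson
#   password: my-plaintext-password
#
# New flow: a single revocable API key sent as a Bearer token. Read
# endpoints like this one don't require auth, but the header shape is
# the same one the new write endpoints expect:
$ export FORGE_API_KEY='<REPLACE WITH YOUR API KEY>'
$ curl -H "Authorization: Bearer ${FORGE_API_KEY}" \
    'https://forgeapi.puppet.com/v3/releases?module=nkanderson-testmodule&limit=1'
~~~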

## Beyond just publishing

Initially, we only set out to create a publish endpoint within the v3 namespace of the Forge API. After thinking about the work this would entail and about the community's needs, we decided it was important to add endpoints for other essential module management tasks as well, namely deletion and deprecation. We ended up adding the following endpoints:
* `POST /v3/releases`
* `DELETE /v3/releases/<release-slug>`
* `DELETE /v3/modules/<module-slug>`
* `PATCH /v3/modules/<module-slug>` (used for module deprecation)

With that, it's possible to automate the entire module lifecycle. Here's an outline of what those steps might look like using `curl`:

To publish a new module release using curl, you can run this command:
~~~
$ curl -D- --request 'POST' 'https://forgeapi.puppet.com/v3/releases' \
  -F file=@nkanderson-testmodule-0.1.0.tar.gz \
  -H 'Authorization: Bearer <REPLACE WITH YOUR API KEY>'
~~~

And to mark a module release as deleted:
~~~
$ curl -D- --request DELETE \
  'https://forgeapi.puppet.com/v3/releases/nkanderson-testmodule-0.1.0?reason=buggy+release' \
  -H 'Authorization: Bearer <REPLACE WITH YOUR API KEY>'
~~~

Deleting the entire module will mark all individual releases as deleted, effectively removing it from the Forge web interface:
~~~
$ curl -D- --request DELETE \
  'https://forgeapi.puppet.com/v3/modules/nkanderson-testmodule?reason=buggy+module' \
  -H 'Authorization: Bearer <REPLACE WITH YOUR API KEY>'
~~~

To mark a module as deprecated, use the PATCH method. Note that JSON data is required to specify the deprecate action. Optional parameters include the reason for deprecation and the slug for a replacement module.
~~~
$ curl -D- --request PATCH 'https://forgeapi.puppet.com/v3/modules/nkanderson-testmodule' \
  -d '{"action": "deprecate", "params": {"reason": "No longer maintained", "replacement_slug": "puppetlabs-mysql"} }' \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer <REPLACE WITH YOUR API KEY>'
~~~
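As a sanity check (our addition, not part of the documented flow), you can fetch the module afterwards and look for the deprecation timestamp. We're assuming here that the v3 module response exposes a `deprecated_at` field once a module is deprecated:

~~~
# Verify the deprecation took effect; read endpoints need no auth.
# "deprecated_at" is an assumption about the response field name:
$ curl -s 'https://forgeapi.puppet.com/v3/modules/nkanderson-testmodule' \
    | grep -o '"deprecated_at":[^,]*'
~~~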

## Bonus: Updated docs!

Once we got into the documentation phase for these new endpoints, we realized our Forge API docs could use a little love. So another improvement we made along the way was to update our docs framework. Beyond just a fresh coat of paint, the new docs are clearer and easier to navigate, with added descriptions for a handful of parameters that hadn't previously surfaced from implementation to the docs.

(Screenshot of New Forge API docs)

Want to try it out? Log in to your Forge account, navigate to your user profile page (hint: click your name in the upper right), and on that page you'll see an option to create a new key. Once you've created a key, you're all set to hit the ground running with automating your module management.


Litmus

Davin Hanlon
 

Hi everyone,


The modules team at Puppet has been working on a new project called Litmus, a framework for acceptance testing Puppet modules. We are in the process of testing this out with our supported modules, and MOTD is the first module that's been converted - see the PR here.


Litmus provides:

  • An interactive workflow, allowing you to provision nodes, install the agent, install the module and run acceptance tests (see the sketch after this list).

  • An extensible framework, allowing additional provisioners or test frameworks to be added.
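To make that workflow concrete, here is a rough sketch of a full run as shell commands. The task names follow the puppet_litmus rake tasks as we understand them from the wiki's MOTD guide; the provisioner and image are illustrative and may change between releases:

~~~
# Provision a test node -- here a CentOS 7 Docker container (the
# provisioner name and image are illustrative):
$ bundle exec rake 'litmus:provision[docker, centos:7]'

# Install the Puppet agent on the provisioned node:
$ bundle exec rake litmus:install_agent

# Build the module from the current checkout and install it on the node:
$ bundle exec rake litmus:install_module

# Run the acceptance suite against all provisioned nodes in parallel:
$ bundle exec rake litmus:acceptance:parallel
~~~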

As part of adopting Litmus, we are performing more acceptance testing in Travis and AppVeyor, reducing our dependence on internal pipelines. This means that contributors will get more extensive feedback when they submit pull requests on modules that use Litmus for acceptance testing.

We encourage you to have a look at the Litmus wiki here. It has a guide for working with MOTD, as well as guides on how to convert existing modules to use Litmus for acceptance tests. We will be migrating Puppet-supported modules to use Litmus over the coming weeks and months. This is the first iteration of Litmus; we plan to continue adding functionality over the coming months to solve more complex use cases, with the goal of Litmus becoming the default acceptance testing tool for Puppet modules.


If you have any questions or queries please raise issues on the GitHub repo and we’ll do our best to respond promptly.


Thanks.





New Blogpost: Setting up Vim for modern Puppet development

 

Hey people!
One of our members, dhollinger, wrote a good article about setting up
Vim for modern Puppet development. Check it out at:
https://voxpupuli.org/blog/2019/04/08/puppet-lsp-vim/

Do you have a good idea for another blog post or want to write one? Let
us know or provide a PR:
https://github.com/voxpupuli/voxpupuli.github.io/tree/master/_posts


Cheers, Tim


Re: GitHub Vox Pupuli Community

 

Hi Tommy,
thanks for asking! Did you already work on one of our repositories? Are
you interested in any particular project?

Cheers,
Tim

On 3/13/19 7:18 AM, Tommy Chong via Groups.Io wrote:
Hey,

May I join your GitHub Vox Pupuli Puppet module and tooling community: https://github.com/voxpupuli ?

I’m a network engineer working on the DevOps approach to network engineering, with a focus on Puppet and automation.

Cheers,
Tommy
GitHub Username: techietommy
https://github.com/techietommy




GitHub Vox Pupuli Community

Tommy Chong <techietommy@...>
 

Hey,

May I join your GitHub Vox Pupuli Puppet module and tooling community: https://github.com/voxpupuli ?

I’m a network engineer working on the DevOps approach to network engineering, with a focus on Puppet and automation.

Cheers,
Tommy
GitHub Username: techietommy
https://github.com/techietommy


Splunk 7.2

David Decker <deckercdavid@...>
 

I'm looking for information about Splunk 7.2.4 and Puppet. I read a while back that Puppet was not working well with 7.1 around the username/password creation.

Just wanted to see if this is still an issue and whether it's been resolved.

Thanks
David


[Announce] Release of Puppet Development Kit v1.9.1

Bryan Jen
 

Hello Everyone,

Puppet Development Kit (PDK) provides integrated testing tools and a command line interface to help you develop, validate, and test modules.  We are excited to announce that PDK v1.9.1 is now available to download.

PDK v1.9.1 is a bugfix release. We've added some much-needed fixes to the new YAML validator introduced in v1.9.0, as well as a few other quality-of-life fixes to `pdk convert` and to PATH resolution on package installations. For more information, please refer to the release notes.
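For anyone who hasn't touched these commands recently, a quick hedged reminder of where those fixes land (commands as we understand them for v1.9.x):

~~~
# Run PDK's validators; the YAML validator added in v1.9.0 runs as
# part of the standard validation pass:
$ pdk validate

# Re-apply the current pdk-templates to an existing module:
$ pdk convert
~~~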

We'd also like to give a big shoutout to our open source community for their valuable contributions to the pdk-templates packaged in this release! These contributions include:

Next steps:
Feedback is always appreciated and can be provided via the '#team-pdk' room in Slack or the PDK project in JIRA.

Regards,
Bryan Jen


Newbie question on development setup

martijn.pepping@...
 

Hi,

In an attempt to actively contribute to module issues and features, I'm looking for suggestions for a good development setup. I'm using Vagrant+Fusion and Docker locally, and AWS remotely. Are there any boxes, container images, or AMIs that would let me hit the ground running, rather than rolling my own? I'm just looking for best practices instead of working out the basic plumbing myself.

Thanks in advance, Martijn.


Re: New Acceptance Testing tool

 

> Everyone on the invite should have access to the repo now: https://github.com/puppetlabs/solid-waffle.

Not me, I get a 404 for that.  Sorry I couldn't make the meeting, but 7am is crazy time in my household ;-)

On Fri, Jan 25, 2019 at 3:32 AM Davin Hanlon <davin.hanlon@...> wrote:
Thanks for the note Ewoud, and thanks to everyone that was able to join.

Everyone on the invite should have access to the repo now: https://github.com/puppetlabs/solid-waffle. If not, drop me a note with your GitHub ID and I'll add you. We're going through a process internally to rename the tool, and once that's done we'll update the name everywhere, and then make the repo public.

Unfortunately, neither TP nor Helen are at cfgmgmtcamp this year (David Schmitt will be there but, while he sits next to both TP and Helen, he hasn't been involved in this effort to date).

For now, the best place to raise queries/concerns/enhancements is GitHub issues - we'll do our best to get back asap. Hopefully that works for everyone.

Let me know if you've any questions.

Thanks,
Davin

On Thu, 24 Jan 2019 at 17:24, Ewoud Kohl van Wijngaarden <ewoud+voxpupuli@...> wrote:
My quick impression of the presentation/tool is that the Puppet team
really understood the shortcomings of beaker and what you want to do as
a module developer. It's clear to me that they're actually module
developers who use this on a daily basis.

The tool itself has some clear limitations that show it's still in early
development. No Vagrant backend was one that multiple people noticed,
but the design already has multiple backends, so adding another should be
fairly straightforward.

The separation of provisioning and test running was something I really
liked. This means you can easily provision, test, change the module,
re-test until it's finished. That also means the spec helper can be less
magical. It also uses the .fixtures.yml we already use for other rspec
testing so it nicely unifies the ecosystem.

Another neat feature was running tests on the supported operating systems in
metadata.json. This will need some more controls, because most
environments can't handle all of them in parallel, but I'm confident this
could be achieved easily.

Overall it looks like the tool we module developers want; it will make our
lives easier and lead to higher-quality modules.

What I forgot to ask: will there be people who worked on this at
cfgmgmtcamp? I'd like to chat some more about it and play with it. It
also means I'll need to modify my beaker talk there a bit :)

On Thu, Jan 10, 2019 at 11:23:05AM +0000, Davin Hanlon wrote:
>We'd like to let you know about some new capabilities that we're working on
>currently to improve acceptance testing of Puppet modules. Acceptance
>testing Puppet modules is tricky and often relies on tools such as Beaker
><https://github.com/puppetlabs/beaker>. There are some scenarios that are
>difficult to test for in modules, such as running against multiple target
>OSes, or just running a single acceptance test in the suite.
>
>We're working on a new tool which aims to address these shortcomings. It
>helps with acceptance testing by setting up the test environment and
>provides 4 main capabilities: provision a machine, install the Puppet
>agent, install a module and run acceptance tests. These steps are broken
>down into a series of rake tasks. Under the hood the tool makes heavy use
>of Bolt <https://puppet.com/products/puppet-bolt>. The tool supports
>parallel acceptance test runs across multiple machines and running
>acceptance tests against localhost. It uses rake tasks rather than relying
>on a configuration file with a single complex CLI command.
>
>We'd love to get some feedback on the tool before we start using it with
>our modules and open up the repo to the public. If anyone is interested in
>learning more about the work and providing feedback please reply to this
>email. I'll set up a meeting when we'll walk through the tool, request
>feedback and provide access to the repo for those that would like to
>contribute.





--
Jo Rhett


Re: New Acceptance Testing tool

Davin Hanlon
 

Thanks for the note Ewoud, and thanks to everyone that was able to join.

Everyone on the invite should have access to the repo now: https://github.com/puppetlabs/solid-waffle. If not, drop me a note with your GitHub ID and I'll add you. We're going through a process internally to rename the tool, and once that's done we'll update the name everywhere, and then make the repo public.

Unfortunately, neither TP nor Helen are at cfgmgmtcamp this year (David Schmitt will be there but, while he sits next to both TP and Helen, he hasn't been involved in this effort to date).

For now, the best place to raise queries/concerns/enhancements is GitHub issues - we'll do our best to get back asap. Hopefully that works for everyone.

Let me know if you've any questions.

Thanks,
Davin

On Thu, 24 Jan 2019 at 17:24, Ewoud Kohl van Wijngaarden <ewoud+voxpupuli@...> wrote:
My quick impression of the presentation/tool is that the Puppet team
really understood the shortcomings of beaker and what you want to do as
a module developer. It's clear to me that they're actually module
developers who use this on a daily basis.

The tool itself has some clear limitations that show it's still in early
development. No Vagrant backend was one that multiple people noticed,
but the design already has multiple backends, so adding another should be
fairly straightforward.

The separation of provisioning and test running was something I really
liked. This means you can easily provision, test, change the module,
re-test until it's finished. That also means the spec helper can be less
magical. It also uses the .fixtures.yml we already use for other rspec
testing so it nicely unifies the ecosystem.

Another neat feature was running tests on the supported operating systems in
metadata.json. This will need some more controls, because most
environments can't handle all of them in parallel, but I'm confident this
could be achieved easily.

Overall it looks like the tool we module developers want; it will make our
lives easier and lead to higher-quality modules.

What I forgot to ask: will there be people who worked on this at
cfgmgmtcamp? I'd like to chat some more about it and play with it. It
also means I'll need to modify my beaker talk there a bit :)

On Thu, Jan 10, 2019 at 11:23:05AM +0000, Davin Hanlon wrote:
>We'd like to let you know about some new capabilities that we're working on
>currently to improve acceptance testing of Puppet modules. Acceptance
>testing Puppet modules is tricky and often relies on tools such as Beaker
><https://github.com/puppetlabs/beaker>. There are some scenarios that are
>difficult to test for in modules, such as running against multiple target
>OSes, or just running a single acceptance test in the suite.
>
>We're working on a new tool which aims to address these shortcomings. It
>helps with acceptance testing by setting up the test environment and
>provides 4 main capabilities: provision a machine, install the Puppet
>agent, install a module and run acceptance tests. These steps are broken
>down into a series of rake tasks. Under the hood the tool makes heavy use
>of Bolt <https://puppet.com/products/puppet-bolt>. The tool supports
>parallel acceptance test runs across multiple machines and running
>acceptance tests against localhost. It uses rake tasks rather than relying
>on a configuration file with a single complex CLI command.
>
>We'd love to get some feedback on the tool before we start using it with
>our modules and open up the repo to the public. If anyone is interested in
>learning more about the work and providing feedback please reply to this
>email. I'll set up a meeting when we'll walk through the tool, request
>feedback and provide access to the repo for those that would like to
>contribute.




Re: New Acceptance Testing tool

Ewoud Kohl van Wijngaarden
 

My quick impression of the presentation/tool is that the Puppet team really understood the shortcomings of beaker and what you want to do as a module developer. It's clear to me that they're actually module developers who use this on a daily basis.

The tool itself has some clear limitations that show it's still in early development. No Vagrant backend was one that multiple people noticed, but the design already has multiple backends, so adding another should be fairly straightforward.

The separation of provisioning and test running was something I really liked. This means you can easily provision, test, change the module, re-test until it's finished. That also means the spec helper can be less magical. It also uses the .fixtures.yml we already use for other rspec testing so it nicely unifies the ecosystem.

Another neat feature was running tests on the supported operating systems in metadata.json. This will need some more controls, because most environments can't handle all of them in parallel, but I'm confident this could be achieved easily.

Overall it looks like the tool we module developers want; it will make our lives easier and lead to higher-quality modules.

What I forgot to ask: will there be people who worked on this at cfgmgmtcamp? I'd like to chat some more about it and play with it. It also means I'll need to modify my beaker talk there a bit :)

On Thu, Jan 10, 2019 at 11:23:05AM +0000, Davin Hanlon wrote:
We'd like to let you know about some new capabilities that we're working on
currently to improve acceptance testing of Puppet modules. Acceptance
testing Puppet modules is tricky and often relies on tools such as Beaker
<https://github.com/puppetlabs/beaker>. There are some scenarios that are
difficult to test for in modules, such as running against multiple target
OSes, or just running a single acceptance test in the suite.

We're working on a new tool which aims to address these shortcomings. It
helps with acceptance testing by setting up the test environment and
provides 4 main capabilities: provision a machine, install the Puppet
agent, install a module and run acceptance tests. These steps are broken
down into a series of rake tasks. Under the hood the tool makes heavy use
of Bolt <https://puppet.com/products/puppet-bolt>. The tool supports
parallel acceptance test runs across multiple machines and running
acceptance tests against localhost. It uses rake tasks rather than relying
on a configuration file with a single complex CLI command.

We'd love to get some feedback on the tool before we start using it with
our modules and open up the repo to the public. If anyone is interested in
learning more about the work and providing feedback please reply to this
email. I'll set up a meeting when we'll walk through the tool, request
feedback and provide access to the repo for those that would like to
contribute.


Re: New Acceptance Testing tool

Nick Miller
 

I think you've gotten the same feeling writing plans as I have, so I'm very interested in what ideas you have for a beaker replacement. Send an invitation my way, please! 


On Thu, Jan 10, 2019, 6:23 AM Davin Hanlon <davin.hanlon@...> wrote:
Hi all!

We'd like to let you know about some new capabilities that we're working on currently to improve acceptance testing of Puppet modules. Acceptance testing Puppet modules is tricky and often relies on tools such as Beaker. There are some scenarios that are difficult to test for in modules, such as running against multiple target OSes, or just running a single acceptance test in the suite.

We're working on a new tool which aims to address these shortcomings. It helps with acceptance testing by setting up the test environment and provides 4 main capabilities: provision a machine, install the Puppet agent, install a module and run acceptance tests. These steps are broken down into a series of rake tasks. Under the hood the tool makes heavy use of Bolt. The tool supports parallel acceptance test runs across multiple machines and running acceptance tests against localhost. It uses rake tasks rather than relying on a configuration file with a single complex CLI command.

We'd love to get some feedback on the tool before we start using it with our modules and open up the repo to the public. If anyone is interested in learning more about the work and providing feedback please reply to this email. I'll set up a meeting when we'll walk through the tool, request feedback and provide access to the repo for those that would like to contribute.

Please reply to this email if you're interested in helping us and I'll include you on the meeting invite!

Thanks for your time.

The Puppet Modules team


Re: New Acceptance Testing tool

Bram Vogelaar
 

I would be very interested too. Beaker has been very frustrating to use; kitchen-ci is much better, but it is single-node only by design.


On Mon, 14 Jan 2019 at 15:42, Rob Nelson <rnelson0@...> wrote:
If it's not too late, please add me to the invite list. I'm in Eastern but should be able to find a way to make it work.

Rob Nelson
rnelson0@...


On Sun, Jan 13, 2019 at 4:40 PM Tim Meusel <tim@...> wrote:
Hi everybody,

On 10.01.19 15:57, Martin Alfke wrote:
> Hi Davin,
>
> I am also interested to learn about acceptance testing using bolt.
> Next week at afternoon EMEA time sounds great for me.
>

Like Martin, I'm interested as well, and I'm in the same timezone.

Cheers, Tim

> Best,
> Martin
>
>
>> On 10. Jan 2019, at 15:50, Davin Hanlon <davin.hanlon@...> wrote:
>>
>> Hi Ewoud - I agree that meetings are poor at spreading information. Our long-term plan with this tool is to open the repo and provide good documentation, including blogs and other resources to help people use it and contribute. The initial meeting and request for feedback is to get an early sense check from experienced folks on whether the approach we're taking is good, what functional gaps exist, and what issues we may find. I'll include you on the invite (next week, in the afternoon EMEA time), but I appreciate that you may not be able to attend. Also, if you provide your GitHub ID I'll add you to the repo in any case.
>>
>> Let me know if you've any questions or comments!
>>
>> Thanks,
>> Davin
>>
>>
>> On Thu, 10 Jan 2019 at 13:12, Ewoud Kohl van Wijngaarden <ewoud@...> wrote:
>> On Thu, Jan 10, 2019 at 11:23:05AM +0000, Davin Hanlon wrote:
>>> We'd like to let you know about some new capabilities that we're working on
>>> currently to improve acceptance testing of Puppet modules. Acceptance
>>> testing Puppet modules is tricky and often relies on tools such as Beaker
>>> <https://github.com/puppetlabs/beaker>. There are some scenarios that are
>>> difficult to test for in modules, such as running against multiple target
>>> OSes, or just running a single acceptance test in the suite.
>>>
>>> We're working on a new tool which aims to address these shortcomings. It
>>> helps with acceptance testing by setting up the test environment and
>>> provides 4 main capabilities: provision a machine, install the Puppet
>>> agent, install a module and run acceptance tests. These steps are broken
>>> down into a series of rake tasks. Under the hood the tool makes heavy use
>>> of Bolt <https://puppet.com/products/puppet-bolt>. The tool supports
>>> parallel acceptance test runs across multiple machines and running
>>> acceptance tests against localhost. It uses rake tasks rather than relying
>>> on a configuration file with a single complex CLI command.
>>>
>>> We'd love to get some feedback on the tool before we start using it with
>>> our modules and open up the repo to the public. If anyone is interested in
>>> learning more about the work and providing feedback please reply to this
>>> email. I'll set up a meeting when we'll walk through the tool, request
>>> feedback and provide access to the repo for those that would like to
>>> contribute.
>>>
>>> Please reply to this email if you're interested in helping us and I'll
>>> include you on the meeting invite!
>>
>> That does sound very interesting.
>>
>> My main issue with beaker is that it's heavily underdocumented.
>> Essentially I always grep through the source to find out how it works.
>>
>> I am very interested, but not sure if I can commit to a meeting. A
>> meeting is not a great way to spread information. Blogs get indexed by
>> search engines and you can easily refer others to it. Please consider
>> writing a blog post introducing the tool. (Shout out to Henrik's
>> https://puppet-on-the-edge.blogspot.com/ for a great example.)
>>
>>
>>
>>
>
>
>
>





Re: New Acceptance Testing tool

Rob Nelson
 

If it's not too late, please add me to the invite list. I'm in Eastern but should be able to find a way to make it work.

Rob Nelson
rnelson0@...


On Sun, Jan 13, 2019 at 4:40 PM Tim Meusel <tim@...> wrote:
Hi everybody,

On 10.01.19 15:57, Martin Alfke wrote:
> Hi Davin,
>
> I am also interested to learn about acceptance testing using bolt.
> Next week at afternoon EMEA time sounds great for me.
>

Like Martin, I'm interested as well, and I'm in the same timezone.

Cheers, Tim

> Best,
> Martin
>
>
>> On 10. Jan 2019, at 15:50, Davin Hanlon <davin.hanlon@...> wrote:
>>
>> Hi Ewoud - I agree that meetings are poor at spreading information. Our long-term plan with this tool is to open the repo and provide good documentation, including blogs and other resources to help people use it and contribute. The initial meeting and request for feedback is to get an early sense check from experienced folks on whether the approach we're taking is good, what functional gaps exist, and what issues we may find. I'll include you on the invite (next week, in the afternoon EMEA time), but I appreciate that you may not be able to attend. Also, if you provide your GitHub ID I'll add you to the repo in any case.
>>
>> Let me know if you've any questions or comments!
>>
>> Thanks,
>> Davin
>>
>>
>> On Thu, 10 Jan 2019 at 13:12, Ewoud Kohl van Wijngaarden <ewoud@...> wrote:
>> On Thu, Jan 10, 2019 at 11:23:05AM +0000, Davin Hanlon wrote:
>>> We'd like to let you know about some new capabilities that we're working on
>>> currently to improve acceptance testing of Puppet modules. Acceptance
>>> testing Puppet modules is tricky and often relies on tools such as Beaker
>>> <https://github.com/puppetlabs/beaker>. There are some scenarios that are
>>> difficult to test for in modules, such as running against multiple target
>>> OSes, or just running a single acceptance test in the suite.
>>>
>>> We're working on a new tool which aims to address these shortcomings. It
>>> helps with acceptance testing by setting up the test environment and
>>> provides 4 main capabilities: provision a machine, install the Puppet
>>> agent, install a module and run acceptance tests. These steps are broken
>>> down into a series of rake tasks. Under the hood the tool makes heavy use
>>> of Bolt <https://puppet.com/products/puppet-bolt>. The tool supports
>>> parallel acceptance test runs across multiple machines and running
>>> acceptance tests against localhost. It uses rake tasks rather than relying
>>> on a configuration file with a single complex CLI command.
>>>
>>> We'd love to get some feedback on the tool before we start using it with
>>> our modules and open up the repo to the public. If anyone is interested in
>>> learning more about the work and providing feedback please reply to this
>>> email. I'll set up a meeting when we'll walk through the tool, request
>>> feedback and provide access to the repo for those that would like to
>>> contribute.
>>>
>>> Please reply to this email if you're interested in helping us and I'll
>>> include you on the meeting invite!
>>
>> That does sound very interesting.
>>
>> My main issue with beaker is that it's heavily underdocumented.
>> Essentially I always grep through the source to find out how it works.
>>
>> I am very interested, but not sure if I can commit to a meeting. A
>> meeting is not a great way to spread information. Blogs get indexed by
>> search engines and you can easily refer others to it. Please consider
>> writing a blog post introducing the tool. (Shout out to Henrik's
>> https://puppet-on-the-edge.blogspot.com/ for a great example.)
>>
>>
>>
>>
>
>
>
>





Re: New Acceptance Testing tool

 

Hi everybody,

On 10.01.19 15:57, Martin Alfke wrote:
Hi Davin,

I am also interested in learning about acceptance testing using Bolt.
Next week, in the afternoon EMEA time, sounds great for me.
Like Martin, I'm interested as well, and I'm in the same timezone.

Cheers, Tim

Best,
Martin


On 10. Jan 2019, at 15:50, Davin Hanlon <davin.hanlon@puppet.com> wrote:

Hi Ewoud - I agree that meetings are poor at spreading information. Our long-term plan with this tool is to open the repo and provide good documentation, including blogs and other resources to help people use it and contribute. The initial meeting and request for feedback is to get an early sense check from experienced folks on whether the approach we're taking is good, what functional gaps exist, and what issues we may find. I'll include you on the invite (next week, in the afternoon EMEA time), but I appreciate that you may not be able to attend. Also, if you provide your GitHub ID I'll add you to the repo in any case.

Let me know if you've any questions or comments!

Thanks,
Davin


On Thu, 10 Jan 2019 at 13:12, Ewoud Kohl van Wijngaarden <ewoud@kohlvanwijngaarden.nl> wrote:
On Thu, Jan 10, 2019 at 11:23:05AM +0000, Davin Hanlon wrote:
We'd like to let you know about some new capabilities that we're working on
currently to improve acceptance testing of Puppet modules. Acceptance
testing Puppet modules is tricky and often relies on tools such as Beaker
<https://github.com/puppetlabs/beaker>. There are some scenarios that are
difficult to test for in modules, such as running against multiple target
OSes, or just running a single acceptance test in the suite.

We're working on a new tool which aims to address these shortcomings. It
helps with acceptance testing by setting up the test environment and
provides 4 main capabilities: provision a machine, install the Puppet
agent, install a module and run acceptance tests. These steps are broken
down into a series of rake tasks. Under the hood the tool makes heavy use
of Bolt <https://puppet.com/products/puppet-bolt>. The tool supports
parallel acceptance test runs across multiple machines and running
acceptance tests against localhost. It uses rake tasks rather than relying
on a configuration file with a single complex CLI command.

We'd love to get some feedback on the tool before we start using it with
our modules and open up the repo to the public. If anyone is interested in
learning more about the work and providing feedback please reply to this
email. I'll set up a meeting when we'll walk through the tool, request
feedback and provide access to the repo for those that would like to
contribute.

Please reply to this email if you're interested in helping us and I'll
include you on the meeting invite!
That does sound very interesting.

My main issue with beaker is that it's heavily underdocumented.
Essentially I always grep through the source to find out how it works.

I am very interested, but not sure if I can commit to a meeting. A
meeting is not a great way to spread information. Blogs get indexed by
search engines and you can easily refer others to it. Please consider
writing a blog post introducing the tool. (Shout out to Henrik's
https://puppet-on-the-edge.blogspot.com/ for a great example.)






Re: New Acceptance Testing tool

David Hollinger
 

Hi Davin,

I'd also be interested in learning about acceptance testing with Bolt. I'm in the Central time zone, but my time will be very limited during the day next week due to a company event. Feasibly, the only days I'm available would be Monday or Tuesday.


Re: New Acceptance Testing tool

Martin Alfke
 

Hi Davin,

I am also interested in learning about acceptance testing using Bolt.
Next week, in the afternoon EMEA time, sounds great for me.

Best,
Martin

On 10. Jan 2019, at 15:50, Davin Hanlon <davin.hanlon@puppet.com> wrote:

Hi Ewoud - I agree that meetings are poor at spreading information. Our long-term plan with this tool is to open the repo and provide good documentation, including blogs and other resources to help people use it and contribute. The initial meeting and request for feedback is to get an early sense check from experienced folks on whether the approach we're taking is good, what functional gaps exist, and what issues we may find. I'll include you on the invite (next week, in the afternoon EMEA time), but I appreciate that you may not be able to attend. Also, if you provide your GitHub ID I'll add you to the repo in any case.

Let me know if you've any questions or comments!

Thanks,
Davin


On Thu, 10 Jan 2019 at 13:12, Ewoud Kohl van Wijngaarden <ewoud@kohlvanwijngaarden.nl> wrote:
On Thu, Jan 10, 2019 at 11:23:05AM +0000, Davin Hanlon wrote:
We'd like to let you know about some new capabilities that we're working on
currently to improve acceptance testing of Puppet modules. Acceptance
testing Puppet modules is tricky and often relies on tools such as Beaker
<https://github.com/puppetlabs/beaker>. There are some scenarios that are
difficult to test for in modules, such as running against multiple target
OSes, or just running a single acceptance test in the suite.

We're working on a new tool which aims to address these shortcomings. It
helps with acceptance testing by setting up the test environment and
provides 4 main capabilities: provision a machine, install the Puppet
agent, install a module and run acceptance tests. These steps are broken
down into a series of rake tasks. Under the hood the tool makes heavy use
of Bolt <https://puppet.com/products/puppet-bolt>. The tool supports
parallel acceptance test runs across multiple machines and running
acceptance tests against localhost. It uses rake tasks rather than relying
on a configuration file with a single complex CLI command.

We'd love to get some feedback on the tool before we start using it with
our modules and open up the repo to the public. If anyone is interested in
learning more about the work and providing feedback please reply to this
email. I'll set up a meeting when we'll walk through the tool, request
feedback and provide access to the repo for those that would like to
contribute.

Please reply to this email if you're interested in helping us and I'll
include you on the meeting invite!
That does sound very interesting.

My main issue with beaker is that it's heavily underdocumented.
Essentially I always grep through the source to find out how it works.

I am very interested, but not sure if I can commit to a meeting. A
meeting is not a great way to spread information. Blogs get indexed by
search engines and you can easily refer others to it. Please consider
writing a blog post introducing the tool. (Shout out to Henrik's
https://puppet-on-the-edge.blogspot.com/ for a great example.)


