
Increase the value of kernelci.org

Julia Amanda <julia.amanda8822@...>
 

Hello kernelci.org



Things that didn’t exist a few years ago, such as social media promotion, are now given high importance in terms of their impact on rankings. Not building a healthy content profile can damage your business, as it is one of the factors Google evaluates while crawling your site.

Later this year, the amount of traffic delivered through mobile devices is expected to exceed that from traditional devices. With this dramatic explosion in mobile usage, a new world of effective SEO techniques has opened up for websites.

As long as you are focusing on optimal user experience while performing SEO strategies, you will be rewarded with higher positioning and organic traffic.

We cover all the major aspects of promotion under a single umbrella: error fixing, keyword ranking, content link submission, social media promotion, and more.

This e-mail provides you with a glimpse of the services we offer. If you require any sort of website assistance or have any queries about our services, please reply to this e-mail.

 

 

Julia Amanda

Digital Marketing Analyst
…………………………………………………………..



Re: next/master boot bisection: next-20190215 on beaglebone-black

Mike Rapoport <rppt@...>
 

On Thu, Apr 11, 2019 at 01:08:15PM -0700, Guenter Roeck wrote:
On Thu, Apr 11, 2019 at 10:35 AM Kees Cook <keescook@chromium.org> wrote:

On Thu, Apr 11, 2019 at 9:42 AM Guenter Roeck <groeck@google.com> wrote:

On Thu, Apr 11, 2019 at 9:19 AM Kees Cook <keescook@chromium.org> wrote:

On Thu, Mar 7, 2019 at 7:43 AM Dan Williams <dan.j.williams@intel.com> wrote:
I went ahead and acquired one of these boards to see if I can
debug this locally.
Hi! Any progress on this? Might it be possible to unblock this series
for v5.2 by adding a temporary "not on ARM" flag?
Can someone send me a pointer to the series in question? I would like
to run it through my testbed.
It's already in -mm and linux-next ("mm: shuffle initial free memory
to improve memory-side-cache utilization") but it gets enabled with
CONFIG_SHUFFLE_PAGE_ALLOCATOR=y (which was made the default briefly in
-mm, which triggered problems on ARM and was reverted).
Boot tests report

Qemu test results:
total: 345 pass: 345 fail: 0

This is on top of next-20190410 with CONFIG_SHUFFLE_PAGE_ALLOCATOR=y
and the known crashes fixed.

$ git log --oneline next-20190410..
3367c36ce744 Set SHUFFLE_PAGE_ALLOCATOR=y for testing.
d2aee8b3cd5d Revert "crypto: scompress - Use per-CPU struct instead
multiple variables"
4bc9f5bc9a84 Fix: rhashtable: use bit_spin_locks to protect hash bucket.

Boot tests on arm are:

Building arm:versatilepb:versatile_defconfig:aeabi:pci:scsi:mem128:versatile-pb:rootfs
... running ........ passed
Building arm:versatilepb:versatile_defconfig:aeabi:pci:mem128:versatile-pb:initrd
... running ........ passed
...

Building arm:witherspoon-bmc:aspeed_g5_defconfig:notests:aspeed-bmc-opp-witherspoon:initrd
... running ........... passed
Building arm:ast2500-evb:aspeed_g5_defconfig:notests:aspeed-ast2500-evb:initrd
... running ................ passed
Building arm:romulus-bmc:aspeed_g5_defconfig:notests:aspeed-bmc-opp-romulus:initrd
... running ......................... passed
Building arm:mps2-an385:mps2_defconfig:mps2-an385:initrd ... running
...... passed
The issue was with an omap2 board and, AFAIK, qemu does not simulate those.

--
Sincerely yours,
Mike.


Re: next/master boot bisection: next-20190215 on beaglebone-black

Dan Williams <dan.j.williams@...>
 

On Thu, Apr 11, 2019 at 1:08 PM Guenter Roeck <groeck@google.com> wrote:

On Thu, Apr 11, 2019 at 10:35 AM Kees Cook <keescook@chromium.org> wrote:

On Thu, Apr 11, 2019 at 9:42 AM Guenter Roeck <groeck@google.com> wrote:

On Thu, Apr 11, 2019 at 9:19 AM Kees Cook <keescook@chromium.org> wrote:

On Thu, Mar 7, 2019 at 7:43 AM Dan Williams <dan.j.williams@intel.com> wrote:
I went ahead and acquired one of these boards to see if I can
debug this locally.
Hi! Any progress on this? Might it be possible to unblock this series
for v5.2 by adding a temporary "not on ARM" flag?
Can someone send me a pointer to the series in question? I would like
to run it through my testbed.
It's already in -mm and linux-next ("mm: shuffle initial free memory
to improve memory-side-cache utilization") but it gets enabled with
CONFIG_SHUFFLE_PAGE_ALLOCATOR=y (which was made the default briefly in
-mm, which triggered problems on ARM and was reverted).
Boot tests report

Qemu test results:
total: 345 pass: 345 fail: 0

This is on top of next-20190410 with CONFIG_SHUFFLE_PAGE_ALLOCATOR=y
and the known crashes fixed.
In addition to CONFIG_SHUFFLE_PAGE_ALLOCATOR=y you also need the
kernel command line option "page_alloc.shuffle=1"

...so I doubt you are running with shuffling enabled. Another way to
double check is:

cat /sys/module/page_alloc/parameters/shuffle
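Putting the two checks together, a quick boot-time sanity check might look like the following sketch. It only reports state; note the sysfs path is present only on kernels built with CONFIG_SHUFFLE_PAGE_ALLOCATOR=y.

```shell
#!/bin/sh
# Sanity check for page allocator shuffling (a sketch).
# Both the Kconfig option and the boot parameter are needed.

# 1. Was shuffling requested on the kernel command line?
if grep -q 'page_alloc\.shuffle=1' /proc/cmdline 2>/dev/null; then
    cmdline_state="requested"
else
    cmdline_state="not requested"
fi

# 2. What does the runtime parameter say? ("Y" means shuffling is active;
#    the file is absent when the option is not built in.)
runtime_state=$(cat /sys/module/page_alloc/parameters/shuffle 2>/dev/null \
                || echo "parameter not present")

echo "command line: shuffle $cmdline_state"
echo "runtime:      $runtime_state"
```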


Re: next/master boot bisection: next-20190215 on beaglebone-black

Guenter Roeck
 

On Thu, Apr 11, 2019 at 1:22 PM Dan Williams <dan.j.williams@intel.com> wrote:

On Thu, Apr 11, 2019 at 1:08 PM Guenter Roeck <groeck@google.com> wrote:

On Thu, Apr 11, 2019 at 10:35 AM Kees Cook <keescook@chromium.org> wrote:

On Thu, Apr 11, 2019 at 9:42 AM Guenter Roeck <groeck@google.com> wrote:

On Thu, Apr 11, 2019 at 9:19 AM Kees Cook <keescook@chromium.org> wrote:

On Thu, Mar 7, 2019 at 7:43 AM Dan Williams <dan.j.williams@intel.com> wrote:
I went ahead and acquired one of these boards to see if I can
debug this locally.
Hi! Any progress on this? Might it be possible to unblock this series
for v5.2 by adding a temporary "not on ARM" flag?
Can someone send me a pointer to the series in question? I would like
to run it through my testbed.
It's already in -mm and linux-next ("mm: shuffle initial free memory
to improve memory-side-cache utilization") but it gets enabled with
CONFIG_SHUFFLE_PAGE_ALLOCATOR=y (which was made the default briefly in
-mm, which triggered problems on ARM and was reverted).
Boot tests report

Qemu test results:
total: 345 pass: 345 fail: 0

This is on top of next-20190410 with CONFIG_SHUFFLE_PAGE_ALLOCATOR=y
and the known crashes fixed.
In addition to CONFIG_SHUFFLE_PAGE_ALLOCATOR=y you also need the
kernel command line option "page_alloc.shuffle=1"

...so I doubt you are running with shuffling enabled. Another way to
double check is:

cat /sys/module/page_alloc/parameters/shuffle
Yes, you are right. With it enabled, I see:

Kernel command line: rdinit=/sbin/init page_alloc.shuffle=1 panic=-1
console=ttyAMA0,115200 page_alloc.shuffle=1
------------[ cut here ]------------
WARNING: CPU: 0 PID: 0 at ./include/linux/jump_label.h:303
page_alloc_shuffle+0x12c/0x1ac
static_key_enable(): static key 'page_alloc_shuffle_key+0x0/0x4' used
before call to jump_label_init()
Modules linked in:
CPU: 0 PID: 0 Comm: swapper Not tainted
5.1.0-rc4-next-20190410-00003-g3367c36ce744 #1
Hardware name: ARM Integrator/CP (Device Tree)
[<c0011c68>] (unwind_backtrace) from [<c000ec48>] (show_stack+0x10/0x18)
[<c000ec48>] (show_stack) from [<c07e9710>] (dump_stack+0x18/0x24)
[<c07e9710>] (dump_stack) from [<c001bb1c>] (__warn+0xe0/0x108)
[<c001bb1c>] (__warn) from [<c001bb88>] (warn_slowpath_fmt+0x44/0x6c)
[<c001bb88>] (warn_slowpath_fmt) from [<c0b0c4a8>]
(page_alloc_shuffle+0x12c/0x1ac)
[<c0b0c4a8>] (page_alloc_shuffle) from [<c0b0c550>] (shuffle_store+0x28/0x48)
[<c0b0c550>] (shuffle_store) from [<c003e6a0>] (parse_args+0x1f4/0x350)
[<c003e6a0>] (parse_args) from [<c0ac3c00>] (start_kernel+0x1c0/0x488)
[<c0ac3c00>] (start_kernel) from [<00000000>] ( (null))

I'll re-run the test, but I suspect it will drown in warnings.

Guenter


Re: next/master boot bisection: next-20190215 on beaglebone-black

Guenter Roeck
 

On Thu, Apr 11, 2019 at 10:35 AM Kees Cook <keescook@chromium.org> wrote:

On Thu, Apr 11, 2019 at 9:42 AM Guenter Roeck <groeck@google.com> wrote:

On Thu, Apr 11, 2019 at 9:19 AM Kees Cook <keescook@chromium.org> wrote:

On Thu, Mar 7, 2019 at 7:43 AM Dan Williams <dan.j.williams@intel.com> wrote:
I went ahead and acquired one of these boards to see if I can
debug this locally.
Hi! Any progress on this? Might it be possible to unblock this series
for v5.2 by adding a temporary "not on ARM" flag?
Can someone send me a pointer to the series in question? I would like
to run it through my testbed.
It's already in -mm and linux-next ("mm: shuffle initial free memory
to improve memory-side-cache utilization") but it gets enabled with
CONFIG_SHUFFLE_PAGE_ALLOCATOR=y (which was made the default briefly in
-mm, which triggered problems on ARM and was reverted).
Boot tests report

Qemu test results:
total: 345 pass: 345 fail: 0

This is on top of next-20190410 with CONFIG_SHUFFLE_PAGE_ALLOCATOR=y
and the known crashes fixed.

$ git log --oneline next-20190410..
3367c36ce744 Set SHUFFLE_PAGE_ALLOCATOR=y for testing.
d2aee8b3cd5d Revert "crypto: scompress - Use per-CPU struct instead
multiple variables"
4bc9f5bc9a84 Fix: rhashtable: use bit_spin_locks to protect hash bucket.

Boot tests on arm are:

Building arm:versatilepb:versatile_defconfig:aeabi:pci:scsi:mem128:versatile-pb:rootfs
... running ........ passed
Building arm:versatilepb:versatile_defconfig:aeabi:pci:mem128:versatile-pb:initrd
... running ........ passed
Building arm:versatileab:versatile_defconfig:mem128:versatile-ab:initrd
... running ........ passed
Building arm:imx25-pdk:imx_v4_v5_defconfig:nonand:mem128:imx25-pdk:initrd
... running ........ passed
Building arm:kzm:imx_v6_v7_defconfig:nodrm:mem128:initrd ... running
.......... passed
Building arm:mcimx6ul-evk:imx_v6_v7_defconfig:nodrm:mem256:imx6ul-14x14-evk:initrd
... running .......... passed
Building arm:mcimx6ul-evk:imx_v6_v7_defconfig:nodrm:sd:mem256:imx6ul-14x14-evk:rootfs
... running .......... passed
Building arm:vexpress-a9:multi_v7_defconfig:nolocktests:mem128:vexpress-v2p-ca9:initrd
... running ........ passed
Building arm:vexpress-a9:multi_v7_defconfig:nolocktests:sd:mem128:vexpress-v2p-ca9:rootfs
... running ........ passed
Building arm:vexpress-a9:multi_v7_defconfig:nolocktests:virtio-blk:mem128:vexpress-v2p-ca9:rootfs
... running ........ passed
Building arm:vexpress-a15:multi_v7_defconfig:nolocktests:sd:mem128:vexpress-v2p-ca15-tc1:rootfs
... running ........ passed
Building arm:vexpress-a15-a7:multi_v7_defconfig:nolocktests:sd:mem256:vexpress-v2p-ca15_a7:rootfs
... running ........ passed
Building arm:beagle:multi_v7_defconfig:sd:mem256:omap3-beagle:rootfs
... running ............ passed
Building arm:beaglexm:multi_v7_defconfig:sd:mem512:omap3-beagle-xm:rootfs
... running ........... passed
Building arm:overo:multi_v7_defconfig:sd:mem256:omap3-overo-tobi:rootfs
... running ........... passed
Building arm:midway:multi_v7_defconfig:mem2G:ecx-2000:initrd ...
running .......... passed
Building arm:sabrelite:multi_v7_defconfig:mem256:imx6dl-sabrelite:initrd
... running ............ passed
Building arm:mcimx7d-sabre:multi_v7_defconfig:mem256:imx7d-sdb:initrd
... running .......... passed
Building arm:xilinx-zynq-a9:multi_v7_defconfig:mem128:zynq-zc702:initrd
... running ............ passed
Building arm:xilinx-zynq-a9:multi_v7_defconfig:sd:mem128:zynq-zc702:rootfs
... running ............ passed
Building arm:xilinx-zynq-a9:multi_v7_defconfig:sd:mem128:zynq-zc706:rootfs
... running ............ passed
Building arm:xilinx-zynq-a9:multi_v7_defconfig:sd:mem128:zynq-zed:rootfs
... running ........... passed
Building arm:cubieboard:multi_v7_defconfig:mem128:sun4i-a10-cubieboard:initrd
... running ........... passed
Building arm:raspi2:multi_v7_defconfig:bcm2836-rpi-2-b:initrd ...
running .......... passed
Building arm:raspi2:multi_v7_defconfig:sd:bcm2836-rpi-2-b:rootfs ...
running .......... passed
Building arm:virt:multi_v7_defconfig:virtio-blk:mem512:rootfs ...
running ......... passed
Building arm:smdkc210:exynos_defconfig:cpuidle:nocrypto:mem128:exynos4210-smdkv310:initrd
... running ......... passed
Building arm:realview-pb-a8:realview_defconfig:realview_pb:mem512:arm-realview-pba8:initrd
... running ........ passed
Building arm:realview-pbx-a9:realview_defconfig:realview_pb:arm-realview-pbx-a9:initrd
... running ........ passed
Building arm:realview-eb:realview_defconfig:realview_eb:mem512:arm-realview-eb:initrd
... running ........ passed
Building arm:realview-eb-mpcore:realview_defconfig:realview_eb:mem512:arm-realview-eb-11mp-ctrevb:initrd
... running ......... passed
Building arm:akita:pxa_defconfig:nofdt:nodebug:notests:novirt:nousb:noscsi:initrd
... running ..... passed
Building arm:borzoi:pxa_defconfig:nofdt:nodebug:notests:novirt:nousb:noscsi:initrd
... running ..... passed
Building arm:mainstone:pxa_defconfig:nofdt:nodebug:notests:novirt:nousb:noscsi:initrd
... running ..... passed
Building arm:spitz:pxa_defconfig:nofdt:nodebug:notests:novirt:nousb:noscsi:initrd
... running ..... passed
Building arm:terrier:pxa_defconfig:nofdt:nodebug:notests:novirt:nousb:noscsi:initrd
... running ..... passed
Building arm:tosa:pxa_defconfig:nofdt:nodebug:notests:novirt:nousb:noscsi:initrd
... running ..... passed
Building arm:z2:pxa_defconfig:nofdt:nodebug:notests:novirt:nousb:noscsi:initrd
... running ..... passed
Building arm:collie:collie_defconfig:aeabi:notests:initrd ... running
..... passed
Building arm:integratorcp:integrator_defconfig:mem128:integratorcp:initrd
... running ....... passed
Building arm:palmetto-bmc:aspeed_g4_defconfig:aspeed-bmc-opp-palmetto:initrd
... running ................. passed
Building arm:witherspoon-bmc:aspeed_g5_defconfig:notests:aspeed-bmc-opp-witherspoon:initrd
... running ........... passed
Building arm:ast2500-evb:aspeed_g5_defconfig:notests:aspeed-ast2500-evb:initrd
... running ................ passed
Building arm:romulus-bmc:aspeed_g5_defconfig:notests:aspeed-bmc-opp-romulus:initrd
... running ......................... passed
Building arm:mps2-an385:mps2_defconfig:mps2-an385:initrd ... running
...... passed

Guenter


Re: next/master boot bisection: next-20190215 on beaglebone-black

Kees Cook <keescook@...>
 

On Thu, Apr 11, 2019 at 9:42 AM Guenter Roeck <groeck@google.com> wrote:

On Thu, Apr 11, 2019 at 9:19 AM Kees Cook <keescook@chromium.org> wrote:

On Thu, Mar 7, 2019 at 7:43 AM Dan Williams <dan.j.williams@intel.com> wrote:
I went ahead and acquired one of these boards to see if I can
debug this locally.
Hi! Any progress on this? Might it be possible to unblock this series
for v5.2 by adding a temporary "not on ARM" flag?
Can someone send me a pointer to the series in question? I would like
to run it through my testbed.
It's already in -mm and linux-next ("mm: shuffle initial free memory
to improve memory-side-cache utilization") but it gets enabled with
CONFIG_SHUFFLE_PAGE_ALLOCATOR=y (which was made the default briefly in
-mm, which triggered problems on ARM and was reverted).

--
Kees Cook


Re: next/master boot bisection: next-20190215 on beaglebone-black

Guenter Roeck
 

On Thu, Apr 11, 2019 at 9:19 AM Kees Cook <keescook@chromium.org> wrote:

On Thu, Mar 7, 2019 at 7:43 AM Dan Williams <dan.j.williams@intel.com> wrote:

On Thu, Mar 7, 2019 at 1:17 AM Guillaume Tucker
<guillaume.tucker@collabora.com> wrote:

On 06/03/2019 14:05, Mike Rapoport wrote:
On Wed, Mar 06, 2019 at 10:14:47AM +0000, Guillaume Tucker wrote:
On 01/03/2019 23:23, Dan Williams wrote:
On Fri, Mar 1, 2019 at 1:05 PM Guillaume Tucker
<guillaume.tucker@collabora.com> wrote:

Is there an early-printk facility that can be turned on to see how far
we get in the boot?
Yes, I've done that now by enabling CONFIG_DEBUG_AM33XXUART1 and
earlyprintk in the command line. Here's the result, with the
commit cherry picked on top of next-20190304:

https://lava.collabora.co.uk/scheduler/job/1526326

[ 1.379522] ti-sysc 4804a000.target-module: sysc_flags 00000222 != 00000022
[ 1.396718] Unable to handle kernel paging request at virtual address 77bb4003
[ 1.404203] pgd = (ptrval)
[ 1.406971] [77bb4003] *pgd=00000000
[ 1.410650] Internal error: Oops: 5 [#1] ARM
[...]
[ 1.672310] [<c07051a0>] (clk_hw_create_clk.part.21) from [<c06fea34>] (devm_clk_get+0x4c/0x80)
[ 1.681232] [<c06fea34>] (devm_clk_get) from [<c064253c>] (sysc_probe+0x28c/0xde4)

It's always failing at that point in the code. Also, when
enabling "debug" on the kernel command line, the issue goes
away (exact same binaries, etc.):

https://lava.collabora.co.uk/scheduler/job/1526327

For the record, here's the branch I've been using:

https://gitlab.collabora.com/gtucker/linux/tree/beaglebone-black-next-20190304-debug

The board otherwise boots fine with next-20190304 (SMP=n), and
also with the patch applied but the shuffle configs set to n.
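For anyone reproducing this, the early-console setup described above amounts to something like the following sketch. The Kconfig symbols are the ones named in the message; the exact console= device string is an assumption for this board.

```shell
# Enable low-level early UART output for the AM335x (BeagleBone Black).
# Run from a kernel source tree with an existing .config.
./scripts/config --enable DEBUG_LL \
                 --enable DEBUG_AM33XXUART1 \
                 --enable EARLY_PRINTK
# Then append "earlyprintk" to the kernel command line, for example:
#   console=ttyO0,115200n8 earlyprintk
```

This is a config fragment rather than a runnable program; `scripts/config` accepts symbol names with or without the CONFIG_ prefix.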

Were there any boot *successes* on ARM with shuffling enabled? I.e.
clues about what's different about the specific memory setup for
beagle-bone-black.
Looking at the KernelCI results from next-20190215, it looks like
only the BeagleBone Black with SMP=n failed to boot:

https://kernelci.org/boot/all/job/next/branch/master/kernel/next-20190215/

Of course that's not all the ARM boards that exist out there, but
it's a fairly large coverage already.

As the kernel panic always seems to originate in ti-sysc.c,
there's a chance it's only visible on that platform... I'm doing
a KernelCI run now with my test branch to double check that,
it'll take a few hours so I'll send an update later if I get
anything useful out of it.
Here's the result: there were a couple of failures, but some were
due to infrastructure errors (nyan-big) and I'm not sure what
the problem was with the meson boards:

https://staging.kernelci.org/boot/all/job/gtucker/branch/kernelci-local/kernel/next-20190304-1-g4f0b547b03da/

So there's no clear indicator that the shuffle config is causing
any issue on any other platform than the BeagleBone Black.

In the meantime, I'm happy to try out other things with more
debug configs turned on or any potential fixes someone might
have.
ARM is the only arch that sets ARCH_HAS_HOLES_MEMORYMODEL to 'y'. Maybe the
failure has something to do with it...

Guillaume, can you try this patch:
Mike, I appreciate the help!


Sure, it doesn't seem to be fixing the problem though:

https://lava.collabora.co.uk/scheduler/job/1527471

I've added the patch to the same branch based on next-20190304.

I guess this needs to be debugged a little further to see what
the panic really is about. I'll see if I can spend a bit more
time on it this week, unless there's any BeagleBone expert
available to help or if someone has another fix to try out.
Thanks for the help Guillaume!

I went ahead and acquired one of these boards to see if I can
debug this locally.
Hi! Any progress on this? Might it be possible to unblock this series
for v5.2 by adding a temporary "not on ARM" flag?
Can someone send me a pointer to the series in question? I would like
to run it through my testbed.

Thanks,
Guenter

Thanks!

--
Kees Cook



Re: next/master boot bisection: next-20190215 on beaglebone-black

Kees Cook <keescook@...>
 

On Thu, Mar 7, 2019 at 7:43 AM Dan Williams <dan.j.williams@intel.com> wrote:

On Thu, Mar 7, 2019 at 1:17 AM Guillaume Tucker
<guillaume.tucker@collabora.com> wrote:

On 06/03/2019 14:05, Mike Rapoport wrote:
On Wed, Mar 06, 2019 at 10:14:47AM +0000, Guillaume Tucker wrote:
On 01/03/2019 23:23, Dan Williams wrote:
On Fri, Mar 1, 2019 at 1:05 PM Guillaume Tucker
<guillaume.tucker@collabora.com> wrote:

Is there an early-printk facility that can be turned on to see how far
we get in the boot?
Yes, I've done that now by enabling CONFIG_DEBUG_AM33XXUART1 and
earlyprintk in the command line. Here's the result, with the
commit cherry picked on top of next-20190304:

https://lava.collabora.co.uk/scheduler/job/1526326

[ 1.379522] ti-sysc 4804a000.target-module: sysc_flags 00000222 != 00000022
[ 1.396718] Unable to handle kernel paging request at virtual address 77bb4003
[ 1.404203] pgd = (ptrval)
[ 1.406971] [77bb4003] *pgd=00000000
[ 1.410650] Internal error: Oops: 5 [#1] ARM
[...]
[ 1.672310] [<c07051a0>] (clk_hw_create_clk.part.21) from [<c06fea34>] (devm_clk_get+0x4c/0x80)
[ 1.681232] [<c06fea34>] (devm_clk_get) from [<c064253c>] (sysc_probe+0x28c/0xde4)

It's always failing at that point in the code. Also, when
enabling "debug" on the kernel command line, the issue goes
away (exact same binaries, etc.):

https://lava.collabora.co.uk/scheduler/job/1526327

For the record, here's the branch I've been using:

https://gitlab.collabora.com/gtucker/linux/tree/beaglebone-black-next-20190304-debug

The board otherwise boots fine with next-20190304 (SMP=n), and
also with the patch applied but the shuffle configs set to n.

Were there any boot *successes* on ARM with shuffling enabled? I.e.
clues about what's different about the specific memory setup for
beagle-bone-black.
Looking at the KernelCI results from next-20190215, it looks like
only the BeagleBone Black with SMP=n failed to boot:

https://kernelci.org/boot/all/job/next/branch/master/kernel/next-20190215/

Of course that's not all the ARM boards that exist out there, but
it's a fairly large coverage already.

As the kernel panic always seems to originate in ti-sysc.c,
there's a chance it's only visible on that platform... I'm doing
a KernelCI run now with my test branch to double check that,
it'll take a few hours so I'll send an update later if I get
anything useful out of it.
Here's the result: there were a couple of failures, but some were
due to infrastructure errors (nyan-big) and I'm not sure what
the problem was with the meson boards:

https://staging.kernelci.org/boot/all/job/gtucker/branch/kernelci-local/kernel/next-20190304-1-g4f0b547b03da/

So there's no clear indicator that the shuffle config is causing
any issue on any other platform than the BeagleBone Black.

In the meantime, I'm happy to try out other things with more
debug configs turned on or any potential fixes someone might
have.
ARM is the only arch that sets ARCH_HAS_HOLES_MEMORYMODEL to 'y'. Maybe the
failure has something to do with it...

Guillaume, can you try this patch:
Mike, I appreciate the help!


Sure, it doesn't seem to be fixing the problem though:

https://lava.collabora.co.uk/scheduler/job/1527471

I've added the patch to the same branch based on next-20190304.

I guess this needs to be debugged a little further to see what
the panic really is about. I'll see if I can spend a bit more
time on it this week, unless there's any BeagleBone expert
available to help or if someone has another fix to try out.
Thanks for the help Guillaume!

I went ahead and acquired one of these boards to see if I can
debug this locally.
Hi! Any progress on this? Might it be possible to unblock this series
for v5.2 by adding a temporary "not on ARM" flag?

Thanks!

--
Kees Cook


Re: [Automated-testing] A common place for CI results?

Veronika Kabatova <vkabatov@...>
 

----- Original Message -----
From: "Kevin Hilman" <khilman@baylibre.com>
To: "Veronika Kabatova" <vkabatov@redhat.com>
Cc: "Guenter Roeck" <groeck@google.com>, "Timothy Bird" <Tim.Bird@sony.com>, "info" <info@kernelci.org>,
automated-testing@yoctoproject.org, kernelci@groups.io
Sent: Wednesday, April 10, 2019 11:13:40 PM
Subject: Re: [Automated-testing] A common place for CI results?

On Wed, Apr 10, 2019 at 10:47 AM Veronika Kabatova <vkabatov@redhat.com>
wrote:

[...]

Is there any page with details on how to join and what are the requirements
on us that I can pass along to management to get an official statement?
Attached is the LF slide deck with the project overview, membership
levels, costs, etc.

I'd be happy to discuss more with you on a call after you review the
deck, but it would have to be next week as I'm OoO for the rest of
this week.
Sounds good. Feel free to reach out off list to set up the time and call
location. I'll prepare a list of questions to discuss (especially as I have
no idea how the project memberships work, even though Red Hat is already a
member of the Linux Foundation). Afterwards I can pass along all the
information and try to get funding.


Thanks,
Veronika


Thanks,

Kevin


Re: [Automated-testing] A common place for CI results?

Kevin Hilman
 

On Wed, Apr 10, 2019 at 10:47 AM Veronika Kabatova <vkabatov@redhat.com> wrote:

[...]

Is there any page with details on how to join and what are the requirements
on us that I can pass along to management to get an official statement?
Attached is the LF slide deck with the project overview, membership
levels, costs, etc.

I'd be happy to discuss more with you on a call after you review the
deck, but it would have to be next week as I'm OoO for the rest of
this week.

Thanks,

Kevin


Re: A common place for CI results?

Veronika Kabatova <vkabatov@...>
 

----- Original Message -----
From: "Guenter Roeck" <groeck@google.com>
To: kernelci@groups.io, "Timothy Bird" <Tim.Bird@sony.com>
Cc: vkabatov@redhat.com, automated-testing@yoctoproject.org, info@kernelci.org
Sent: Tuesday, April 9, 2019 3:41:24 PM
Subject: Re: A common place for CI results?

On Mon, Apr 8, 2019 at 10:48 PM <Tim.Bird@sony.com> wrote:

-----Original Message-----
From: Veronika Kabatova
...
as we know from this list, there are plenty of CI systems doing some testing on
the upstream kernels (and maybe some others we don't know about).

It would be great if there was a single common place where all the CI systems
can put their results. This would make it much easier for the kernel
maintainers and developers to see testing status, since they only need to
check one place instead of having a list of sites/mailing lists where each CI
posts their contributions.
We've had discussions about this, and decided there are a few issues.
Some of these you identify below.


A few weeks ago, with some people we've been talking about kernelci.org being
in a good place to act as the central upstream kernel CI piece that most
maintainers already know about. So I'm wondering if it would be possible for
kernelci to also act as an aggregator of all results?
Right now, the kernelCI central server is (to my knowledge) maintained
by Kevin Hilman, on his own dime. That may be changing with the Linux
Foundation possibly creating a testing project to provide support for this.
But in any event, at the scale we're talking about (with lots of test
frameworks and potentially thousands of boards and hundreds of thousands of
test run results arriving daily), hosting this is costly. So there's a
question of who pays for this.
In theory that would be the Linux Foundation as part of the KernelCI
project. Unfortunately, while companies and people do show interest in
KernelCI, there seems to be little interest in actually joining the
project. My understanding is that the Linux Foundation will only make
it official if/when there are five members. Currently there are three,
Google being one of them. Any company interested in the project may
possibly want to consider joining it. When doing so, you'll have
influence setting its direction, and that may include hosting test
results other than those from KernelCI itself.
Is there any page with details on how to join and what are the requirements
on us that I can pass along to management to get an official statement?

We are definitely interested in more involvement with upstream, both kernel
and different CI systems, as we have a common goal. If we can help each other
out and build a central CI system for upstream kernels that people can rely
on, we want to be a part of this effort. We have started our own interaction
with upstream (see my intro email on this list) but as all CI systems face
the same challenges, it only makes sense to join forces.

Guenter

There's already an API
for publishing a report [0], so it shouldn't be too hard to adjust it to
handle and show more information. I also found the beta version for test
results [1], so actually most of the needed functionality seems to be already
there. Since there will be multiple CI systems, the source and contact point
for the contributor (so maintainers know whom to ask about results if needed)
would likely be the only missing essential data point.
One of the things on our action item list is to have discussions about a
common results format.
See https://elinux.org/ATS_2018_Minutes (towards the end right before
"Decisions from the summit")
I think this addresses the issue of what information is needed for a
universal results format.
I think we should definitely add a 'contributor' field to a common
definition, for the reasons you mention.

Some other issues are making it so that different test frameworks emit the
same testcase names when they run the same test. For example, in Fuego there
is a testcase called Functional.LTP.syscalls.abort07. It's not required, but
it seems like it would be valuable if CKI, Linaro, Fuego and others decided
on a canonical name for this particular testcase, so the names were the same
in each run result.
Good point. CKI only reports full testsuite names as results (so it would be
"LTP lite") and then we have a short log with subtests and results, and a
longer log with details. But for CI systems that report each subtest
separately, having a common name (with maybe "LTP" as an aggregated result)
would definitely be beneficial and easier to parse by both humans and
automation.

I took an action item from our meetings at Linaro last week to look at this
issue (testcase name harmonization).
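As a concrete illustration of the two points above (a 'contributor' field plus canonical testcase names), a common result record might look something like the following sketch. The field names here are hypothetical, not an existing KernelCI or CKI schema.

```shell
#!/bin/sh
# A hypothetical common result record (sketch, not a real schema):
# one canonical testcase name plus a contributor/contact data point.
record='{
  "test_case": "Functional.LTP.syscalls.abort07",
  "status": "PASS",
  "kernel": "next-20190410",
  "arch": "arm",
  "contributor": {
    "name": "Example CI",
    "contact": "ci-results@example.org"
  }
}'
printf '%s\n' "$record"
```

With each CI system emitting the same `test_case` string for the same test, an aggregator could group results across frameworks and still know whom to contact about any given run.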


The common place for results would also make it easier for new CI systems to
get involved with upstream. There are likely other companies out there running
some tests on the kernel internally but don't publish the results anywhere.
Only adding some API calls into their code (with the data they are allowed to
publish) would make it very simple for them to start contributing. If we want
to make them interested, the starting point needs to be trivial. Different
companies have different setups and policies, and they might not be able to
fulfill arbitrary requirements, so they opt not to get involved at all, which
is a shame because their results can be useful. After the initial "onboarding"
step they might be willing to contribute more and more too.
Indeed. Probably most groups don't publish their test results, even
when they are using open source tests. There are lots of reasons for this
(including there not being a place to publish them, as you mention).
It would be good to also address the other reasons that testing entities
don't publish, and to remove as many obstacles to publishing test results
as possible (or to encourage it as much as we can).
Absolutely agreed.

Please let me know if the idea makes sense or if something similar is already
planned. I'd be happy to contribute to the effort because I believe it would
make everyone's life easier and we'd all benefit from it (and maybe someone
else from my team would be willing to help out too if needed).
I think it makes a lot of sense, and we'd like to take steps to make that
possible.

The aspect of this that I plan to work on myself is testcase name
harmonization. That's one aspect of standardizing a common or universal
results format. But I've already got a lot of things I'm working on; if
someone else wants to volunteer to work on this, or head up a workgroup to
work on this, let me know.
Totally understand your situation, too much work and too little time :) I can
try to put an idea together and post it here for feedback to help out. Do you
have any data points or previous discussions to link? It would be great to
have something to build upon, instead of posting a brain dump that runs into
already known issues (that just aren't known to me).



Veronika

Regards,
-- Tim




Re: please add android-4.X-q common kernels to kernelci

Matt Hart
 

On Tue, 9 Apr 2019 at 06:48, 'Todd Kjos' via info <info@kernelci.org> wrote:

We just created the "dessert" kernels for the Android Q release; please add them to
kernelci testing:

repo: https://android.googlesource.com/kernel/common
branches:
android-4.9-q
android-4.14-q
android-4.19-q

These will get updated weekly with LTS merges but have very little action otherwise.
I'll get these added. If there's any concern about build capacity I
will add GCE builders as required.


Thanks.

-Todd


Re: [Automated-testing] A common place for CI results?

Mark Brown <broonie@...>
 

On Tue, Apr 09, 2019 at 06:41:24AM -0700, Guenter Roeck wrote:
On Mon, Apr 8, 2019 at 10:48 PM <Tim.Bird@sony.com> wrote:
Right now, the kernelCI central server is (to my knowledge) maintained
by Kevin Hilman, on his own dime. That may be changing with the Linux
Linaro is paying for the core servers (the Hetzner boxes with the core
servers are a combination of Linaro and Collabora, IIRC the boxes
Collabora is paying for are all builders). As far as I'm aware no
individual is paying out of pocket for anything except for labs at the
minute.


Re: A common place for CI results?

Guenter Roeck
 

On Mon, Apr 8, 2019 at 10:48 PM <Tim.Bird@sony.com> wrote:

-----Original Message-----
From: Veronika Kabatova
...
as we know from this list, there are plenty of CI systems doing some testing
on the upstream kernels (and maybe some others we don't know about).

It would be great if there was a single common place where all the CI systems
can put their results. This would make it much easier for the kernel
maintainers and developers to see testing status since they only need to
check one place instead of having a list of sites/mailing lists where each CI
posts their contributions.
We've had discussions about this, and decided there are a few issues.
Some of these you identify below.


A few weeks ago we were talking with some people about kernelci.org being
in a good place to act as the central upstream kernel CI piece that most
maintainers already know about. So I'm wondering if it would be possible for
kernelci to also act as an aggregator of all results?
Right now, the kernelCI central server is (to my knowledge) maintained
by Kevin Hilman, on his own dime. That may be changing with the Linux
Foundation possibly creating a testing project to provide support for this.
But in any event, at the scale we're talking about (with lots of test frameworks
and potentially thousands of boards and hundreds of thousands of test
run results arriving daily), hosting this is costly. So there's a question of
who pays for this.
In theory that would be the Linux Foundation as part of the KernelCI
project. Unfortunately, while companies and people do show interest in
KernelCI, there seems to be little interest in actually joining the
project. My understanding is that the Linux Foundation will only make
it official if/when there are five members. Currently there are three,
Google being one of them. Any company interested in the project might
want to consider joining it. When doing so, you'll have influence in
setting its direction, and that may include hosting test results other
than those from KernelCI itself.

Guenter

There's already an API
for publishing a report [0] so it shouldn't be too hard to adjust it to
handle and show more information. I also found the beta version for test
results [1] so actually, most of the needed functionality seems to be already
there. Since there will be multiple CI systems, the source and contact point
for the contributor (so maintainers know whom to ask about results if needed)
would likely be the only missing essential data point.
One of the things on our action item list is to have discussions about a common results format.
See https://elinux.org/ATS_2018_Minutes (towards the end right before "Decisions from the summit")
I think this addresses the issue of what information is needed for a universal results format.
I think we should definitely add a 'contributor' field to a common definition, for the
reasons you mention.

Another issue is getting different test frameworks to emit the same testcase
names when they run the same test. For example, in Fuego there is a testcase called
Functional.LTP.syscalls.abort07. It's not required, but it seems like it would be valuable
if CKI, Linaro, Fuego and others decided on a canonical name for this particular testcase,
so the names were the same in each run result.

I took an action item from our meetings at Linaro last week to look at this issue
(testcase name harmonization).
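To make the harmonization goal concrete, a shared mapping table is one possible starting point. In the sketch below only the Fuego name comes from the discussion above; the other spellings and the canonical form are invented placeholders, not real identifiers from CKI or Linaro:

```python
# Illustrative sketch: map framework-specific testcase names to a single
# canonical name so results for the same test can be aggregated.
# Only the Fuego name is taken from the thread; the other keys and the
# canonical spelling are hypothetical examples.
CANONICAL_NAMES = {
    "Functional.LTP.syscalls.abort07": "ltp.syscalls.abort07",  # Fuego
    "ltp/syscalls/abort07": "ltp.syscalls.abort07",        # hypothetical spelling
    "ltp-syscalls-tests/abort07": "ltp.syscalls.abort07",  # hypothetical spelling
}

def canonicalize(name: str) -> str:
    """Return the canonical testcase name, or the input unchanged."""
    return CANONICAL_NAMES.get(name, name)

print(canonicalize("Functional.LTP.syscalls.abort07"))  # ltp.syscalls.abort07
```

A real scheme would of course need the frameworks to agree on the canonical form itself, which is exactly the harmonization work being proposed.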


The common place for results would also make it easier for new CI systems to
get involved with upstream. There are likely other companies out there running
some tests on the kernel internally but don't publish the results anywhere. Only
adding some API calls into their code (with the data they are allowed to
publish) would make it very simple for them to start contributing. If we want
to make them interested, the starting point needs to be trivial. Different
companies have different setups and policies and they might not be able to
fulfill arbitrary requirements so they opt to not get involved at all, which
is a shame because their results can be useful. After the initial "onboarding"
step they might be willing to contribute more and more too.
Indeed. Probably most groups don't publish their test results, even
when they are using open source tests. There are lots of reasons for this
(including there not being a place to publish them, as you mention).
It would be good to also address the other reasons that testing entities
don't publish, and to remove as many obstacles to publishing test results
as possible (or to encourage it as much as we can).

Please let me know if the idea makes sense or if something similar is already
in plans. I'd be happy to contribute to the effort because I believe it would
make everyone's life easier and we'd all benefit from it (and maybe someone
else from my team would be willing to help out too if needed).
I think it makes a lot of sense, and we'd like to take steps to make that possible.

The aspect of this that I plan to work on myself is testcase name harmonization.
That's one aspect of standardizing a common or universal results format.
But I've already got a lot of things I'm working on. If someone else wants to
volunteer to work on this, or head up a workgroup to work on this, let me know.

Regards,
-- Tim




Re: A common place for CI results?

Tim.Bird@...
 

-----Original Message-----
From: Veronika Kabatova
...
as we know from this list, there are plenty of CI systems doing some testing
on the upstream kernels (and maybe some others we don't know about).

It would be great if there was a single common place where all the CI systems
can put their results. This would make it much easier for the kernel
maintainers and developers to see testing status since they only need to
check one place instead of having a list of sites/mailing lists where each CI
posts their contributions.
We've had discussions about this, and decided there are a few issues.
Some of these you identify below.


A few weeks ago we were talking with some people about kernelci.org being
in a good place to act as the central upstream kernel CI piece that most
maintainers already know about. So I'm wondering if it would be possible for
kernelci to also act as an aggregator of all results?
Right now, the kernelCI central server is (to my knowledge) maintained
by Kevin Hilman, on his own dime. That may be changing with the Linux
Foundation possibly creating a testing project to provide support for this.
But in any event, at the scale we're talking about (with lots of test frameworks
and potentially thousands of boards and hundreds of thousands of test
run results arriving daily), hosting this is costly. So there's a question of
who pays for this.

There's already an API
for publishing a report [0] so it shouldn't be too hard to adjust it to
handle and show more information. I also found the beta version for test
results [1] so actually, most of the needed functionality seems to be already
there. Since there will be multiple CI systems, the source and contact point
for the contributor (so maintainers know whom to ask about results if needed)
would likely be the only missing essential data point.
One of the things on our action item list is to have discussions about a common results format.
See https://elinux.org/ATS_2018_Minutes (towards the end right before "Decisions from the summit")
I think this addresses the issue of what information is needed for a universal results format.
I think we should definitely add a 'contributor' field to a common definition, for the
reasons you mention.

Another issue is getting different test frameworks to emit the same testcase
names when they run the same test. For example, in Fuego there is a testcase called
Functional.LTP.syscalls.abort07. It's not required, but it seems like it would be valuable
if CKI, Linaro, Fuego and others decided on a canonical name for this particular testcase,
so the names were the same in each run result.

I took an action item from our meetings at Linaro last week to look at this issue
(testcase name harmonization).


The common place for results would also make it easier for new CI systems to
get involved with upstream. There are likely other companies out there running
some tests on the kernel internally but don't publish the results anywhere. Only
adding some API calls into their code (with the data they are allowed to
publish) would make it very simple for them to start contributing. If we want
to make them interested, the starting point needs to be trivial. Different
companies have different setups and policies and they might not be able to
fulfill arbitrary requirements so they opt to not get involved at all, which
is a shame because their results can be useful. After the initial "onboarding"
step they might be willing to contribute more and more too.
Indeed. Probably most groups don't publish their test results, even
when they are using open source tests. There are lots of reasons for this
(including there not being a place to publish them, as you mention).
It would be good to also address the other reasons that testing entities
don't publish, and to remove as many obstacles to publishing test results
as possible (or to encourage it as much as we can).

Please let me know if the idea makes sense or if something similar is already
in plans. I'd be happy to contribute to the effort because I believe it would
make everyone's life easier and we'd all benefit from it (and maybe someone
else from my team would be willing to help out too if needed).
I think it makes a lot of sense, and we'd like to take steps to make that possible.

The aspect of this that I plan to work on myself is testcase name harmonization.
That's one aspect of standardizing a common or universal results format.
But I've already got a lot of things I'm working on. If someone else wants to
volunteer to work on this, or head up a workgroup to work on this, let me know.

Regards,
-- Tim


please add android-4.X-q common kernels to kernelci

'Todd Kjos' via info <info@...>
 

We just created the "dessert" kernels for the Android Q release, please add them to
kernelci testing:

repo: https://android.googlesource.com/kernel/common
branches:
  android-4.9-q
  android-4.14-q
  android-4.19-q

These will get updated weekly with LTS merges but have very little action otherwise.
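For reference, kernelci-core describes trees and branches in a YAML build configuration; an entry for these branches might look roughly like this (the key names and layout are an assumption about the build-configs.yaml format of the time, not a verified excerpt):

```yaml
# Hypothetical build-configs.yaml fragment; key names are assumptions.
trees:
  android:
    url: "https://android.googlesource.com/kernel/common"

build_configs:
  android_4.19-q:
    tree: android
    branch: 'android-4.19-q'
```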

Thanks.

-Todd


A common place for CI results?

Veronika Kabatova <vkabatov@...>
 

Hi,

as we know from this list, there are plenty of CI systems doing some testing on the
upstream kernels (and maybe some others we don't know about).

It would be great if there was a single common place where all the CI systems
can put their results. This would make it much easier for the kernel
maintainers and developers to see testing status since they only need to
check one place instead of having a list of sites/mailing lists where each CI
posts their contributions.


A few weeks ago we were talking with some people about kernelci.org being
in a good place to act as the central upstream kernel CI piece that most
maintainers already know about. So I'm wondering if it would be possible for
kernelci to also act as an aggregator of all results? There's already an API
for publishing a report [0] so it shouldn't be too hard to adjust it to
handle and show more information. I also found the beta version for test
results [1] so actually, most of the needed functionality seems to be already
there. Since there will be multiple CI systems, the source and contact point
for the contributor (so maintainers know whom to ask about results if needed)
would likely be the only missing essential data point.
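As a sketch of what a submission carrying that extra contact point could look like (every field name below is illustrative only, not the actual api.kernelci.org schema, and the contact address is a placeholder):

```python
import json

# Hypothetical result payload for an aggregated-results API.
# All field names are assumptions made for illustration; the real
# kernelci report schema may differ.
result = {
    "job": "next",
    "kernel": "next-20190408",
    "board": "beaglebone-black",
    "status": "PASS",
    # The missing data point discussed above: who produced the
    # result and whom maintainers can contact about it.
    "contributor": {
        "name": "CKI Project",
        "contact": "cki-project@example.com",  # placeholder address
    },
}

print(json.dumps(result, indent=2))
```

The point is only that a single well-known field per submission would let maintainers route questions about any result back to the CI system that produced it.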


The common place for results would also make it easier for new CI systems to
get involved with upstream. There are likely other companies out there running
some tests on the kernel internally but don't publish the results anywhere. Only
adding some API calls into their code (with the data they are allowed to
publish) would make it very simple for them to start contributing. If we want
to make them interested, the starting point needs to be trivial. Different
companies have different setups and policies and they might not be able to
fulfill arbitrary requirements so they opt to not get involved at all, which
is a shame because their results can be useful. After the initial "onboarding"
step they might be willing to contribute more and more too.


Please let me know if the idea makes sense or if something similar is already
in plans. I'd be happy to contribute to the effort because I believe it would
make everyone's life easier and we'd all benefit from it (and maybe someone
else from my team would be willing to help out too if needed).


Thanks,

Veronika Kabatova
CKI Project


[0] https://api.kernelci.org/examples.html#sending-a-boot-report
[1] https://kernelci.org/test/


kernelci.org update - 2019-04-04

Guillaume Tucker
 

Summary of changes going into production today:

* test-configs: fix imx6ul-pico-hobbit using barebox

* kci_build: split into sub-commands and improve help messages

* buildroot: upgrade to 2018.11 and add MIPS variant

* stretch-igt: disable MIPS builds as currently broken in igt


On the staging and testing side of things, there's now a Linux
kernel tree in the kernelci Github project to hold branches to
test the kernelci.org infrastructure:

https://github.com/kernelci/linux

I'm gradually automating the steps required to run all the
pending changes on staging, see this wip repo:

https://gitlab.collabora.com/gtucker/kernelci-staging-tools


Guillaume


Re: New meeting time

Mark Brown
 

On Mon, 25 Mar 2019 at 18:28, Matt Hart <matthew.hart@...> wrote:
On Mon, 25 Mar 2019 at 11:23, Mark Brown <broonie@...> wrote:
>
> Hi,
>
> It's been just over a week since I sent out the doodle[1] for the new
> meeting time and we've got responses in.  The top options we have allow
> everyone to attend (at least acording to the survey!):
>
>  10 - Monday  14:00
>     - Tuesday 16:00
>     - Friday  15:00 (I will have a conflict every other week)
>   9 - Monday  15:00 (but no Guillaume)
>
> with a bunch of other options with 8 votes.  I don't know what my
> schedule is going to be like in a month or so but I'm guessing the other
> slots should be fine for me.  Given that practically speaking it's
> difficult for Kevin to attend the Monday 14:00 slot I propose that we go
> with the Tuesday 16:00 slot since that's the latest one and therefore
> most friendly to those on PST.  Thoughts?

Makes sense to me, as long as it definitely works for our PST folks

It’s a bit late to move this week’s meeting but I’ll move next week’s given the universal acclaim. Everyone from Linaro is at Connect this week so attendance is likely to be light. 



>
> Thanks,
> Mark
>
> [1] https://doodle.com/poll/3yb3mzgxt7nqwg87
>
>
>




Re: Email reports from staging and automated staging tests

Kevin Hilman
 

"Guillaume Tucker" <guillaume.tucker@gmail.com> writes:

Hello,

I've revived the main jobs on staging.kernelci.org to build and
test the kernelci-core/staging.kernelci.org branch with the
intent to automatically update it every day with all the open
PRs. It has a series of extra patches to make it appropriate on
staging, with a separate tree and bisection reports disabled to
avoid spamming people. The history can be found here:

https://github.com/kernelci/kernelci-core/commits/staging.kernelci.org

I've initially put only mgalka and myself on the list of
recipients for these jobs, but I think it would make sense to
have all the contributors whose code gets tested on staging. So
please let me know if you wish to be added or not added to the
list.
Yes please.

I've started a tool to gather all the PRs for a particular Github
project into a "staging" branch with an extra series of patches
to apply on top of it:

https://gitlab.collabora.com/gtucker/kernelci-staging-tools

I'll add a README and if this looks fine then it should be moved
to Github. Also we should probably have a Linux kernel tree for
KernelCI to have a branch to test on staging,
Agreed.

as at the moment it's using my own tree.

The next bit that needs to be automated is to push the staging
branches to the Github repos. For this I think we'll need to
create a Github user with some dedicated SSH keys. The script
can be used for any project (kernelci-core, kernelci-backend,
kernelci-frontend...). We would also need an extra user to
deploy the backend and frontend branches over SSH on the staging
server. After that, it could add a change to a test kernel
branch and trigger a kernel-tree-monitor job to get the whole
thing running. That could be run as a daily Jenkins job.
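The PR-gathering step described above can be sketched as a dry run that only prints the git commands it would execute (the PR numbers and branch names are placeholders; the one real convention relied on is that GitHub exposes each pull request's head as `refs/pull/<N>/head`):

```python
# Dry-run sketch of gathering open GitHub PRs into a "staging" branch.
# Prints the git commands instead of running them; PR numbers and
# branch names are hypothetical, not the real kernelci-core state.
OPEN_PRS = [101, 105, 108]  # hypothetical open PR numbers
BASE_BRANCH = "master"
STAGING_BRANCH = "staging.kernelci.org"

def plan_staging_branch(prs, base, staging):
    """Return the git commands that would build the staging branch."""
    cmds = [f"git checkout -B {staging} origin/{base}"]
    for pr in prs:
        # GitHub exposes each PR's head as refs/pull/<N>/head.
        cmds.append(f"git fetch origin refs/pull/{pr}/head")
        cmds.append("git merge --no-edit FETCH_HEAD")
    return cmds

for cmd in plan_staging_branch(OPEN_PRS, BASE_BRANCH, STAGING_BRANCH):
    print(cmd)
```

A real tool would additionally apply the staging-only patch series on top and push the result, as the message describes.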
CI for kernelCI. Nice!

Kevin
