KernelCI local instance Setup

neelima@...
 

Hey!

I am trying to set up a local instance of KernelCI in my lab to trigger some automated fuzzing on different kernel drivers. I started with an end-to-end local KernelCI setup to figure out the logistics. These are the BKMs I looked at:
1. The excellent talk by Mikal Galka on bootstrapping a local instance with Docker: https://elinux.org/images/8/86/Bootstraping_Local_KernelCI.pdf
2. I also set up the KernelCI backend and frontend, starting with kernelci-backend-config using Ansible: https://github.com/kernelci/kernelci-backend-config/blob/main/INSTALL.md

I was successful in setting up the backend and frontend in both cases. The obvious next step was to push a locally built kernel to the frontend: https://github.com/kernelci/kernelci-core/blob/main/doc/kci_build.md
Despite seeing no errors, I am unable to see the build in the frontend. I tried both the kernelci-docker repos from lucj and the official kernelci GitHub repos.

./kci_build push_kernel --kdir=linux --api=http://127.0.0.1:8081 --db-token 501f4555-4627-4038-bc07-493a4d2d974e
Loading config/core/rootfs-configs.yaml
Loading config/core/test-configs.yaml
Loading config/core/lab-configs.yaml
Loading config/core/db-configs.yaml
Loading config/core/build-configs.yaml
Upload path: next/master/next-20210513/x86_64/defconfig/gcc-8

I don't think it's a proxy issue, as I have tried it outside my corporate network too. http://localhost:8080/build/ and http://localhost:8080/jobs/ both show "No data available".
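
For completeness, this is how I have been checking whether anything reached the backend database directly (a sketch only; I am assuming the /job endpoint accepts the same Authorization header as the /version endpoint does):

curl -H "Authorization: 501f4555-4627-4038-bc07-493a4d2d974e" http://127.0.0.1:8081/job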

Pertinent logs: Attaching docker-compose logs.

I am pretty confident I am overlooking something obvious. I have been staring at this for too long; perhaps someone else in the community could point it out?

Thank you for your time in advance.


Re: Chrome OS and KernelCI

Jesse Barnes <jsbarnes@...>
 

On Thu, Apr 29, 2021 at 11:35 PM Guillaume Tucker
<guillaume.tucker@collabora.com> wrote:

As you all know, KernelCI is dedicated to testing the upstream kernel. This is
what the results on linux.kernelci.org are all about. Still, there are many
products out there running Linux with many downstream changes, such as distros
for desktop and enterprise applications, mobile devices etc... Several of the
KernelCI Linux Foundation members make Linux-based products, and testing
upstream is valuable to them because every issue caught there is something they
won't have to fix in their product. In fact, it brings products closer to
upstream kernels over time.

One such member is Google, with Chrome OS products. There are an increasing
number of Chromebook devices in KernelCI (mostly in Collabora's lab) which can
be used to test that upstream kernels and in particular stable ones are working
well on this hardware. They're currently running all the regular tests like
other platforms: LTP, kselftest, igt, v4l2-compliance... Now, as a member
company, Google would like to extend coverage with additional tests that are
only currently available within Chrome OS. This is exploring a new avenue for
KernelCI, and it's important to ensure all aspects and decisions are well
discussed openly with the community.
[wow this turned into a big reply; short version is I think we have
some really valuable tests that people will want to see results from,
and I definitely think we shouldn't limit ourselves to traditional
distros]

Thanks Guillaume for starting this discussion; summarizing your points
as I understand them:
1) non-traditional userspace required
2) some higher level tests
3) may be harder to reproduce by some developers
4) higher level tests may be harder to isolate as kernel issues

I think 1 and 3 are related, since they both create barriers for
developers running traditional distros who want to reproduce
problems. Similarly, if a problem only occurs on a Chromebook,
developers may have a hard time reproducing it (I expect this to
happen in the audio stack sometimes, for example). Some alternatives we've talked about
include containerizing the tests instead of having a full rootfs; this
may allow external developers to simply pull an image and run it,
simplifying that aspect of testing. That introduces its own issues
though in terms of device access and fidelity to the original test
intent, so would require some effort to get right (but might be a good
thing to invest in for the long term).
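
As a very rough sketch of what "pull an image and run it" could look like (the image name here is made up, and real tests would need flags for whatever device access they use):

docker pull example.org/cros-tests/memory-pressure:latest
docker run --rm --device=/dev/dri/card0 example.org/cros-tests/memory-pressure:latest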

2 and 4 are also related, and are real challenges. That said, I don't
think we should give up easily here! One of my big motivations for
pushing our tests into KernelCI is to provide upstream developers with
test results they can't get today. In particular, I want to push at
least a couple of categories of test:
- desktop style memory pressure tests
- tests that measure input latency in web apps while running mixed
workloads including video conferencing

In both cases, the results of our tests report user visible metrics
like tab switch times and input latency as seen by the user (e.g.
keypress to glyph on screen), in addition to several more common
metrics like CPU usage, framerate, etc. I think several upstream
developers are interested in this type of feedback, and I think these
results close a gap in upstream testing we've had for some time. If one
of these tests regresses, it will involve some work to figure out, and
there's definitely a statistical element to the results, but if we
keep the userspace and test images fixed while only changing the
kernel, I think we can be fairly confident about kernel
responsibility if enough runs show a change (though clearly there are
exceptions here).

One obvious advantage of having these high level tests is that they
represent common workloads and measure user visible behavior, and big
regressions especially would be something the broader community would
want to avoid, if only for selfish reasons! And over time, we could
add additional tests that correlate to higher level behaviors but that
are easier to reproduce and debug (though for mm and scheduling in
particular, I don't think there's a good substitute for comprehensive
high level tests).

I think if we can figure out an approach on the
userspace/reproducibility issues, we have an opportunity to make Linux
a lot better for the wide variety of userspace stacks out there; imo
we shouldn't limit ourselves to traditional distros, especially as
things have evolved so far from those early days (e.g. packages to
containers & VMs, moving away from X, development of new approaches
like io_uring with implications for app & distro architecture, etc).

And I think the high level tests are just a starting point; I'm hoping
KernelCI over time ingests a large body of tests, from low level to
high level, providing the broader community with a variety of metrics
to make releases even higher quality than they are today.

Thanks,
Jesse


Chrome OS and KernelCI

Guillaume Tucker
 

As you all know, KernelCI is dedicated to testing the upstream kernel. This is
what the results on linux.kernelci.org are all about. Still, there are many
products out there running Linux with many downstream changes, such as distros
for desktop and enterprise applications, mobile devices etc... Several of the
KernelCI Linux Foundation members make Linux-based products, and testing
upstream is valuable to them because every issue caught there is something they
won't have to fix in their product. In fact, it brings products closer to
upstream kernels over time.

One such member is Google, with Chrome OS products. There are an increasing
number of Chromebook devices in KernelCI (mostly in Collabora's lab) which can
be used to test that upstream kernels and in particular stable ones are working
well on this hardware. They're currently running all the regular tests like
other platforms: LTP, kselftest, igt, v4l2-compliance... Now, as a member
company, Google would like to extend coverage with additional tests that are
only currently available within Chrome OS. This is exploring a new avenue for
KernelCI, and it's important to ensure all aspects and decisions are well
discussed openly with the community.

Of course, any test that can find a kernel issue is useful. However, in the
case of Chrome OS tests:

* They can typically only be run within a Chrome OS user-space

This is due to the dependencies on libraries and services that only exist in
Chrome OS. Those tests can in theory be made more portable, but not in a
trivial way. It makes it harder for any kernel developer to reproduce a test
than, say, with KUnit, kselftest or LTP. Even if the user-space is
directly available for everyone to use locally, it is still an extra hurdle.

* Some may be higher-level workloads than bare metal kernel ones

When a kernel panics, we know it's a kernel problem. When a video stream is
not playing correctly in a Chrome web browser, or if performance has dropped,
it can be harder to directly blame the kernel even if it's the only moving part
between CI runs. For example, it may be due to a sub-optimal user-space
implementation made visible only as a side-effect of some kernel changes. So
when reporting Chrome OS test errors, it can be tricky to confidently point the
finger at some kernel patch.

* The issues they find may not be detected by any other kind of tests

If a Chrome OS test finds an issue, for example a benchmark drop, but generic
benchmarks don't, then it's tempting to say the issue is specific to Chrome OS.
When reporting this issue to a random kernel developer not working on Chrome OS
products, it may be harder to convince them they've broken something than with
well-known generic tests. Each issue is different, and maybe some will be
obviously due to the kernel in which case it's all good. But there is a
possibility that reporting unclear issues too quickly can have a negative
effect in the community for both Chrome OS and KernelCI.


It would be interesting to know what others think of this, and whether the
issues highlighted above seem like they would set a precedent that would
cause KernelCI to deviate from its intended purpose.


In terms of addressing these issues, one option is to create a separate
KernelCI instance hosted on chromeos.kernelci.org. We could then have extra
kernel builds for Chromebooks (e.g. config fragments), dedicated Chrome OS
rootfs images to run Chrome OS tests and email reports sent only to Chrome OS
developers if we want to. This would give us a stepping stone to try things
out without interfering with linux.kernelci.org. Whenever some tests are
deemed acceptable, they could be migrated to the main linux.kernelci.org
instance. It would still be public and building only upstream / stable
kernels, but with a focus on Chrome OS testing. All the results would also be
sent to the common reporting database KCIDB.
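
As a concrete illustration of the config fragments part, a local Chromebook build could look roughly like this with kci_build (a sketch only: the fragment name is hypothetical, and the exact fragment syntax is described in kci_build.md):

./kci_build build_kernel --kdir=linux --arch=x86_64 --build-env=gcc-8 \
    --defconfig="defconfig+chromebook.config"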

Having a dedicated instance is something every KernelCI LF project member can
benefit from, and this would be the first one. So decisions around how it gets
done will set some precedent for other members.


Does anyone have any concerns about this? Or, on the contrary, would it seem
appropriate to directly enable tests with a Chrome OS user-space on the main
linux.kernelci.org instance and report issues in the same way as any generic
test results? Please share any thoughts you may have. It's important to
ensure we find ways for member companies to fully benefit from KernelCI while
also serving the wider kernel community in the best possible way.

Best wishes,
Guillaume


Re: RFC: building a regression tracking bot for Linux kernel development

Guillaume Tucker
 

+kernelci +automated-testing

Hi Thorsten,

On 22/04/2021 08:16, Thorsten Leemhuis wrote:
Lo! As mentioned a few times recently I'm starting to build a bot for
semi-automatic Linux kernel regression tracking. Find below a rough
description of how I imagine it's going to work. That way I want to give
everyone a chance to influence things before I'm starting to code for
real. Early feedback will help to build something that's acceptable for
the Linux kernel developer community and used in practice in the long
run, and that's what I aim for.

I know, I know, "Talk is cheap. Show me the code.". But I had to think
things through and write some of it down anyway, so no harm done in
posting it as RFC. I CCed ksummit, as many maintainers hang out there
and because this is a follow up to my former regression tracking work we
discussed at both the kernel and maintainers summits in 2017; in fact it
hopefully might be something for this year as well, we'll see, too early
to tell.

This sounds great, with a simple email-based interface and a
well-defined scope for the bot's role.

There are a few things that are worth mentioning from a KernelCI
point of view though, to ensure both tools work together and not
against each other (see https://kernelci.org).

The first thing is that KernelCI detects and tracks a fair amount
of regressions all the time, then runs automated bisections to
find the breaking commits. This currently leads to between 1 and
10 unique "bug reports" per week. So including "regzbot" in
those reports would initially seem simple and very effective.

Then another aspect to take into account is the proliferation of
tools. KernelCI's mission is not only to run tests but also to
gather results from other test systems into a common database
called KCIDB. The main goal is to provide a full picture of all
the issues in one place, with unified email reports and a web
dashboard.

Tracking regressions is on the roadmap for KCIDB, although it's
not yet entirely decided how it will actually work. Ideally it
would simply let systems submit their own regression data to
KCIDB, which sounds very similar to what regzbot would be doing.

I can think of several ways to orchestrate these things together.
In a nutshell, this is what I believe to be the best way around:

* automated test systems submit regressions to KCIDB, just like
some of them already do with build and test data (syzbot, Red
Hat's CKI, linux.kernelci.org, tuxsuite...)

* regzbot provides a way for reporting regressions by hand via
email, and forwards them automatically to KCIDB too

Essentially, regzbot would remain autonomous but also act as an
email-based interface to submit data to KCIDB. This gives you a
web dashboard and a common place to gather all regressions "for
free" among other things on kcidb.kernelci.org (still early
days). You may also generate some regzbot-dedicated web pages
elsewhere if needed in practice; both could co-exist.

How does that sound?

Another hypothetical scenario would be if regzbot were the
unifying tool, and all automated test systems sent their
regression data to it. But then, KCIDB would become either
redundant or starved of the regression data it needs to
complement its test results. So this doesn't seem so good.


I know it's important to have tools that work "by hand" for
developers, but automated testing leads to a better life!

Best wishes,
Guillaume

So how will "regzbot" work? The ideal case is simple:

Someone reports a regression to the recently created regressions mailing
list (regressions@lists.linux.dev). In the report, the user includes a tag like this:
#regzb introduced: 94a632d91ad1 ("usc: xhbi-foo: check bar_params earlier")
That will make regzbot add the report to the list of regressions it
tracks, which among other things will make it store the mail's message-id
(let's assume it's `xt6uzpqtaqru6pmh@earth.solsystem`). Ideally some
developer will fix the regression with a patch within a few days. When
doing so, they often already include a tag linking to the report:
Link: https://lore.kernel.org/r/xt6uzpqtaqru6pmh@earth.solsystem

Regzbot will notice this tag refers to the regression it tracks and
automatically close the entry for it.

That's it already. The regression was tracked with:

* minimal overhead for the reporter
* no additional overhead for the developers – only something they ought
to do already became more important

Invisible ideally
-----------------

In the ideal case regzbot thus seems to be of no use. But obviously
things will quite often be anything but ideal – for example when
nobody fixes the reported regression.

The webpages that Regzbot will generate (see below) will show this. They
are meant, among others, for Linus or Greg to check how things stand, so
they can simply fix a regression by reverting the causing commit if they
want to; in other situations they might decide to delay a release to get
crucial regressions solved.

And that's what regression tracking is about: providing a view into the
state of things with regards to regressions, as that's the important
thing missing in Linux kernel development right now.


That can't be all
-----------------

Of course the world is more complicated than the simple example scenario
above, as the devil is always in the details. The three most obvious
problems the initial ideal scenario left aside:

* The reporter doesn't specify the #regzb tag at all. Regzbot can't do
anything about that; it sadly won't have visionary powers or an AI engine
any time soon. Some human (for a while that often will be me) thus needs
to reply to the report with the tag to make regzbot track it.

* The commit causing the regression is unknown to the reporter. In that
case the tag should mention the range in which the regression was introduced:
#regzb introduced: v5.7..v5.8-rc1
* The developer who fixes the issue forgets to place the "Link:" tag,
which can't be added once committed. In that case some human needs to
reply to the thread with the initial report with a tag like this:
#regzb Fixed-by: c39667ddcfd5

How will it look
----------------

Here is a mockup from the website of the regzbot project:
https://linux-regtracking.leemhuis.info/images/regzbot-mockup.png

You'll notice a few things:

* regressions for the mainline kernel will be shown on a different page
than those in stable and longterm kernels, as they are handled by
different people.

* regressions where the culprit is known get the top spot, as the
change causing them can sometimes simply be reverted to fix the regression.

* the second spot is for regressions in the current cycle, as, contrary
to those in a previous release, there is still time to fix them before the
next release.

* Regzbot will try to monitor the process between reporting and fixing
and provide links to look up details. Regzbot will thus watch the thread
where the regression was reported and show when it noticed the last
activity; it will also look out for `#regzb Link:` and `Link:` tags in
patch submissions and linux-next. That way release managers can
immediately see if things stalled after the regression was reported; it
also allows them to see if developers are working on a fix and how far
it got in the machinery. If the causing commit is known, the webview
obviously will link to it as well.

* regressions where nothing happened for a while will be moved to the
"dormant" page, to prevent the status page from getting filled by
reports that obviously nobody cares about anymore. Reporters will be
told about this by mail to give them a chance to provide a fresh status
update to get things rolling again.


Even more problems in the details
---------------------------------

Regzbot on purpose will lack many features found in traditional bug
trackers: it's meant to be a simple tool acting in the background
without much overhead, as it doesn't want to become yet another bug
tracker. Nevertheless, it will need a few features they typically offer.
Those will be usable via tags that need to be dropped into mails sent in
direct or indirect reply to the mail with the report:

* Mark a report as a duplicate of another or revert such a marking:
#regzb dup: https://lore.kernel.org/r/yt6uzpqtaqru6pmh@mars.solsystem
#regzb undup

* Mark a report as invalid:
#regzb invalid: Turned out it never worked

* Generate a new title:
#regzb new-title: Insert better description of the regression

* the initially mentioned tag can be used in replies to the report to
specify the commit causing the regression:
#regzb introduced: v5.7..v5.8-rc1

* Tell regzbot that a discussion is related to a tracked regression:
#regzb Link: https://lore.kernel.org/r/yt6uzpqtaqru6pmh@mars.solsystem
In the long run this is supposed to work in both directions, so you
can use it in a thread started by a regression report to link to some
other discussion or vice versa.


Implications and hidden aspects
-------------------------------

There are a few things of note:

* The plan for now is to not have a tag like `#regzb unfix`: in case it
turns out a commit did not fix a regression it's likely better to start
with a fresh report anyway. That forces someone to explain the current
state of things, including the history, clearly and straightforwardly;
that makes things a lot easier for others to follow in these situations
and thus is a good thing.

* regzbot works without a public unique-id, as it uses the URL of the
report instead and keeps an eye on it using the mail's message-id (say
20210406135151.xt6uzpqtaqru6pmh@earth.solsystem).

* regzbot won't be able to handle regressions reported to a mailing
list thread that is already tracked by regzbot, as it will assume all
mails in a thread are related to the earlier report. In that case the
reporter must be asked to start a new mailing list thread for the second
regression. But that's quite normal, as a similar approach is needed
when somebody reports an issue deep in a bug tracker ticket that was
created for a totally different issue.

* Initially it won't be possible to track reports that are filed in bug
trackers; but this use-case will be kept in mind during the design to
make sure such functionality can easily be added later.

* developers, when fixing a regression with a bisected "#regzb
introduced:" tag, can simply do `s/#regzb introduced:/Fixes:/` to get a
tag they are supposed to add anyway.

* regressions in stable and longterm kernels sometimes affect multiple
versions, for example if a patch that works fine in mainline was
backported to the longterm kernels 5.10 and 5.4 – but causes problems in
both, as something required by the patch is missing in those lines. How
this will be solved exactly remains to be seen, maybe like this:
#regzb Introduced: c39667ddcfd6 e39667ddcfd1 ("usc: xhbi-foo: check bar_params a little later again")
Then regzbot can look those commits up and from that determine the
affected versions. Obviously the reporter will likely not be aware of
this, hence the stable maintainer or the developer will likely need
to send a mail to make regzbot aware that this regression affects
multiple versions.

* Regzbot will need to be able to work with mails where mailers placed
a linebreak into the text that follows the #regzb tag. This will be
tricky, but is doable (see the sketch after this list).

* to keep things simple there is no authentication and there are no
restrictions for now, so anyone could mess things up by sending mails to
an open list and using those tags. If that, against expectations, turns
out to become a problem, some restrictions will need to be put in place,
for example to allow changes only from email addresses that (1) are on
an allow list, (2) participated in the discussion or (3) have commits in
the kernel. People could still forge complete mails including "From",
but that's quite some work for not much to gain (except for messing up
regression tracking).
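
Here is the sketch of the linebreak handling mentioned in the list above; it's crude, assumes folded lines are indented (which mailers don't guarantee), and ignores MIME entirely:

# join any line starting with whitespace onto the previous one,
# then pick out the tag lines
sed -e ':a' -e 'N' -e '$!ba' -e 's/\n[[:space:]]\{1,\}/ /g' report.txt | grep '^#regzb '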


Implementation
--------------

The rough initial idea had been to reuse parts of the syzbot golang
source code, which already has an email interface similar to the one
regzbot needs. But the closer I looked, the more I came to the
conclusion that writing something in python is easier and better (even
if that means I need to bring my really rusty python skills up to
speed). That also has the benefit that python afaics is preferred by the
kernel.org admins, which would make it more attractive for them to host
the bot later.

The focus will be on properly establishing regression tracking with
regzbot first. All features not strictly needed will thus be left out
initially to focus on what's most important. I'll also provide documentation
and will use the bot myself to track regressions as I did a few years
ago. Just like any other tracking solution it will always need some
hand-holding...

= EOF =

That's it. FWIW, this mail is a slightly modified version of a text I
posted on the website for the regzbot project:
https://linux-regtracking.leemhuis.info/post/regzbot-approach/

Side note: that project and my work is funded by NGI pointer for one
year (see the website's about page for details). Follow-up funding won't
be possible from there, but hopefully by then I can find some other way
to keep things running and me in a position to look after regression
tracking.

Ciao, Thorsten


Sharing Compass CI results with KCIDB

Nikolai Kondrashov
 

Hi Fengguang,

Tim Bird has recommended I contact you regarding Compass CI, which you presented at Linaro Connect this year.

We at Linux Foundation's KernelCI project are developing a system for aggregating kernel testing reports into a single database, dashboard, and e-mail notification system for use by kernel maintainers, developers, and researchers. The system is called "KCIDB" and I presented it at the same Linaro Connect this year:

https://connect.linaro.org/resources/lvc21/lvc21-310/

We're already collecting data from six different kernel CI systems in our prototype database and dashboard:

https://kcidb.kernelci.org

We are working on reaching developers with our data next.

I was wondering if you're doing any testing of public kernel trees at Compass CI, and if so, whether you'd be interested in contributing testing results to KCIDB as well.

We hope to make it easier for developers to access all testing results, and to replace the multiple different e-mail reports and dashboards they're receiving and accessing with a single one, thus saving them time, effort, and frustration. Having your data in the database, your requirements accounted for in our implementation, and hearing your ideas of how this could be done, would help us towards that goal.
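
For a rough idea of what contributing involves: reports are JSON documents sent with the kcidb command-line tools. A minimal sketch (the install command follows the kcidb README, while the report file, project and topic below are placeholders we would agree on together):

pip3 install --user git+https://github.com/kernelci/kcidb.git
# check a report against the I/O schema, then send it
kcidb-validate < report.json
kcidb-submit -p <gcp-project> -t <submission-topic> < report.json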

Would you be interested in joining us and contributing your data to KCIDB?

Thank you.
Nick


#KCIDB engagement report #KCIDB

Nikolai Kondrashov <nkondras@...>
 

Hi everyone,

Below is the monthly report on KCIDB* engagement. It lists various CI systems
and their status of engagement with KCIDB, and once we get to that, will list
developer engagement.

Lines with updates are marked with "!".

The main news is KernelCI now sending all production data, which boosted the
average number of reports we're getting every day: over 50 revisions, some 10K
builds, and around 100K tests.

We're also getting closer to a new release and starting to reach developers
with e-mail notifications.

KernelCI native
! Sending (a lot of) production build and test results.
! https://staging.kernelci.org:3000/?var-origin=kernelci

Red Hat CKI
Sending production results.
https://staging.kernelci.org:3000/?var-origin=redhat

Google Syzbot
Sending a subset of production results (failures only).
https://staging.kernelci.org:3000/?var-origin=syzbot

ARM
Sending production results.
Full commit hashes are currently not available, are spoofed, and don't
match the ones reported by others. To be fixed soon.
https://staging.kernelci.org:3000/?var-origin=arm

Sony Fuego
Internal design in progress.

Gentoo GKernelCI
Sending production results.
Only builds (a few architectures), no configs, no logs, and no tests
for now, but working on growing contributions.
https://staging.kernelci.org:3000/?var-origin=gkernelci

Intel 0day
Initial conversation concluded, general interest expressed,
no contact since.

Linaro
Sending (a lot of) Tuxsuite build results to "production" KCIDB.
https://staging.kernelci.org:3000/?var-origin=tuxsuite

TuxML
Initial contact in response to a report.
There's a plan to approach us and start work in the coming months.

Yocto Project
Initial contact in response to a report.
Would like to start sending build and test results, particularly for
older kernels. Would like to separate upstream commits from project
patches first: https://bugzilla.yoctoproject.org/show_bug.cgi?id=14196

Please respond with corrections or suggestions of other CI systems to contact.

Nick

*KCIDB is an effort to unify Linux Kernel CI reporting, maintained by Linux
Foundation's KernelCI project:
https://foundation.kernelci.org/blog/2020/08/21/introducing-common-reporting/


Re: KernelCI backend redesign and generic lab support

Bjorn Andersson
 

On Fri 05 Mar 14:55 CST 2021, Guillaume Tucker wrote:

Hello,
Hi Guillaume,

Sorry for taking so long to give you some feedback on this.

As it has been mentioned multiple times recently, the
kernelci-backend code is ageing pretty badly: it's doing too
many things so it's hard to maintain, there are better ways to
implement a backend now with less code, and it's still Python
2.7. Also, there is a need to better support non-LAVA labs such
as Labgrid. Finally, in order to really implement a modular
KernelCI pipeline, we need a good messaging system to
orchestrate the different components - which is similar to
having a generic way to notify labs about tests to run. For all
these reasons, it's now time to seriously consider how we should
replace it with a better architecture.

I've gathered some ideas in this email regarding how we might go
about doing that. It seems like there are several people
motivated to help on different aspects of the work, so it would
be really great to organise this as a community development
effort.

Please feel free to share your thoughts about any of the points
below, and say whether you're interested in taking part in any of
it. If there appears to be enough interest, we should schedule
a meeting to kick-start this in a couple of weeks or so.


* Design ideas

* REST API to submit / retrieve data
* same idea as existing one but simplified implementation using jsonschema
* auth tokens but if possible using existing frameworks to simplify code

* interface to database
* same idea as now but with better models implementation

* pub/sub mechanism to coordinate pipeline with events
* new feature, framework to be decided (Cloud Events? Autobahn?)
* no logic in backend, only messages
* send notifications when things get added in database
My current approach for lab-bjorn is to poll the REST API from time to
time for builds that match some search criteria relevant for my boards
and submit these builds to a RabbitMQ "topic" exchange. Then I have
individual jobs per board that consume these builds, run tests and
submit test results to a queue, which finally is consumed by a thing
that reports back using the REST API.

The scraper at the beginning of the pipeline works, but replacing it
with a subscriber model would feel like a better design. Perhaps
RabbitMQ is too low level? But the model would be nice to have.
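
For reference, the shape of that setup expressed with the rabbitmqadmin tool is roughly the following (the exchange, queue and routing key names are made up for illustration):

rabbitmqadmin declare exchange name=kci.builds type=topic
rabbitmqadmin declare queue name=lab-bjorn.db845c
rabbitmqadmin declare binding source=kci.builds destination=lab-bjorn.db845c routing_key="build.arm64.#"
# the poller publishes each matching build as a message:
rabbitmqadmin publish exchange=kci.builds routing_key=build.arm64.defconfig payload='{"kernel": "next-20210513"}'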



* Client side

Some features currently in kernelci-backend should be moved to client side
and rely on the pub/sub and API instead:

* LAVA callback handling (receive from LAVA, push via API)
* log parsing (subscribe to events, get log when notified, send results)
Since I moved to the REST API for reporting, instead of faking a LAVA
instance, I lost a few details - such as the LAVA parser generating HTML
logs. Nothing serious, but unifying the interface here would be good.

Regards,
Bjorn


Re: Kernel CI setup

Guillaume Tucker
 

Hi Neelima,

Please see my replies inline.

On 08/04/2021 00:32, Krishnan, Neelima wrote:

Hi,

 

Following up here… I was able to install the kernelci frontend and backend locally. I commented out the Centos 7.3 nodejs Hack in roles/install-deps/tasks/main.yml and installed libhttp-parser-dev in the Debian buster VM manually. I am now able to see the frontend running at http://127.0.0.1:5000/
Great!

This sounds like something that should be fixed in the Ansible
configuration, it should work on Debian without any changes.  In
fact I thought all references to other distros (e.g. CentOS) had
been dropped since only Debian is being tested and used on
kernelci.org.  Please feel free to send a PR to fix that.

I will send a PR to update the documentation with some gaps I see.
Thank you.

I want to integrate Jenkins and LAVA into this local setup. Do I do this using https://github.com/kernelci/kernelci-jenkins and https://github.com/kernelci/lava-docker? Is there a BKM I could use?
I think the next step would be to get a LAVA instance up and
running, then you can start using it with kci_test by hand to
check it's all working fine.  Once you have that you can set up
Jenkins if you want to create an actual CI pipeline to build
kernels and run tests automatically.
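
Roughly speaking, a manual run with kci_test has two steps: generate the LAVA job definitions, then submit them to your lab. This is only a sketch from memory - the lab name, user and token are placeholders, and the exact options are described in kci_test.md in kernelci-core:

./kci_test generate --lab=lab-local ... --output=jobs
./kci_test submit --lab=lab-local --user=admin --token=$LAVA_TOKEN --jobs="jobs/*"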

There is nothing specific to KernelCI when installing LAVA, so
following the regular documentation should be fine.  You can use
the Docker image from lava-docker but it's not required.  See
this part of the documentation for more details on how to get
started:

  https://kernelci.org/docs/labs/lava/

Initially you should be able to set it up with a QEMU instance to
run tests with no extra hardware.

Best wishes,
Guillaume


 

*From:* Krishnan, Neelima
*Sent:* Tuesday, April 6, 2021 6:35 PM
*To:* Guillaume Tucker <guillaume.tucker@collabora.com>
*Cc:* kernelci@groups.io
*Subject:* RE: Kernel CI setup

 

Hi Guillaume,

 

Thank you for the quick reply. I was able to go forward with the backend installation. The problem was the URL I had set in the curl command.

curl -v -XPOST -H "Content-Type: application/json" -H "Authorization: master-key" "http://localhost:8888/token" -d '{"email": "xxxx@xx.xx", "username": "xxx", "admin": 1}'

I had set this to point to http://api.mydomain.local:8888.

I got a token from the backend.

 

For setting up the frontend, do I use https://github.com/kernelci/kernelci-frontend-config.git? Do I use the same VM as the one I had set up for the backend, or do I create a new VM “kernelci-frontend”?

 

Either way, when I run the ansible command I get this:

 

TASK [install-deps : Centos 7.3 nodejs Hack (bug 1481008 / 1481470)] ******************************************************************************************

fatal: [kernelci-frontend]: FAILED! => {"msg": "The conditional check
'ansible_distribution == \"CentOS\" and ansible_distribution_major_version == \"7\" and ansible_distribution_version | search(\"7.3\")' failed. The error was: template error while templating string: no filter named 'search'. String: {% if ansible_distribution == \"CentOS\" and ansible_distribution_major_version == \"7\" and ansible_distribution_version | search(\"7.3\") %} True {% else %} False {% endif %}\n\nThe error
appears to be in '/home/labuser/kernelci/kbc/kernelci-frontend-config/roles/install-deps/tasks/main.yml': line 28, column 3, but may\nbe elsewhere
in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: Centos 7.3 nodejs Hack (bug 1481008 / 1481470)\n  ^ here\n"}

I did update the secret.yml with the token I got from the backend.

Neelima

 

*From:* Guillaume Tucker <guillaume.tucker@collabora.com>
*Sent:* Tuesday, April 6, 2021 1:29 AM
*To:* Krishnan, Neelima <neelima.krishnan@intel.com>
*Cc:* kernelci@groups.io
*Subject:* Re: Kernel CI setup

 

+kernelci@groups.io

 

Hello,

 

On 06/04/2021 05:47, Krishnan, Neelima wrote:

Guillaume,

 

I am trying to setup the kernel ci local instance in my lab, as per
instructions from: https://github.com/kernelci/kernelci-doc/wiki/Setting-up-a-local-development-instance

This documentation should still be mostly correct although it
might have a few inaccuracies and gaps to fill.  Also, the
installation steps are quite lengthy and complicated even for
someone familiar with the code.  Having new people like you
trying to install it and providing feedback is a great way of
improving things.

I am not using Docker for the installation. Instead I am trying out the VM method. I was able to use the kernelci-backend-config git to install
the backend on a VM. Now I am trying to generate the tokens as per instructions in the INSTALL.md @ https://github.com/kernelci/kernelci-doc/wiki/Setting-up-a-local-development-instance

 

Here is what I get. This is not a proxy issue. I am not sure how to move forward. Can you please advise on how to proceed?

 

* Connected to api.mydomain.local (127.0.0.1) port 8888 (#0)

> POST /token HTTP/1.1

> Host: api.mydomain.local:8888

> User-Agent: curl/7.64.0

> Accept: */*

> Content-Type: application/json

> Authorization: master-key

> Content-Length: 63



* upload completely sent off: 63 out of 63 bytes

< HTTP/1.1 403 Operation not permitted: provided token is not authorized

< Content-Length: 12

< Vary: Accept-Encoding

< Server: TornadoServer/3.2.2

< Date: Tue, 06 Apr 2021 01:45:25 GMT

< Access-Control-Allow-Headers: authorization

< Content-Type: application/json; charset=UTF-8



* Connection #0 to host api.mydomain.local left intact

A few things to check:

* Is the kernelci-backend server process running?

To start it manually (for example, on port 5001):

cd kernelci-backend/app
python server.py --port=5001

To start the Celery process manually:

cd kernelci-backend/app
sudo python \
    -OO \
    -R \
    /srv/.venv/kernelci-backend/bin/celery worker \
    -Ofair \
    --without-gossip \
    --autoscale=24,6 \
    --loglevel=INFO \
    --app=taskqueue \
    --pidfile=/tmp/kernelci-celery.pid

If you start them manually, you'll see the logs directly in the
terminals and that should help debugging issues.  On a real
deployment, these services are typically run with systemd and the
backend is behind a web server such as nginx.

* Is the Mongo DB service running?

$ sudo systemctl | grep mongo
mongod.service  loaded active running   MongoDB Database Server
* Which API tokens have been created?

$ mongo kernel-ci
db['api-token'].find({}, {token: 1, properties: 1})
{ "_id" : ObjectId("5bc51d7bb8d49ee75dbd5a6e"), "properties" : [ 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0 ], "token" : "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX" }
{ "_id" : ObjectId("5dd80903469d6ddc79e7f7cc"), "token" : "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX", "properties" : [ 0, 0, 0, 1, 1, 0, 0, 1, 0, 0,
0, 0, 0, 0, 0, 0 ] }
* Can the token be used with the API?

$ curl -X GET -H "Authorization: XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX" http://localhost:5001/version
{"code":200,"result":[{"full_version":"2020.11.2","version":"2020.11.2"}]}

If all these things work, then it's possible to start sending
build and test results to it manually.  Setting up a frontend
instance would be a logical next step, and potentially Jenkins to
have a fully automated pipeline although that's not really needed
in a local development setup.

Hope this helps!

Best wishes,
Guillaume

 

 

 


Re: Please add/remove android kernels

Mark Brown
 

On Fri, Apr 09, 2021 at 09:22:33AM -0700, Todd Kjos via groups.io wrote:
Is email still the right way for me to request this sort of thing? Or
should I be self-serving by submitting the proposed changes myself?

Email is still fine, but if you want to submit pull requests that'd also
be fine - it'd cut out the middleman a bit and reduce the potential for
transcription errors I guess. In any case:

https://github.com/kernelci/kernelci-core/pull/641

hopefully does everything (some of the removals had already happened).


Re: Please add/remove android kernels

Todd Kjos <tkjos@...>
 

Is email still the right way for me to request this sort of thing? Or
should I be self-serving by submitting the proposed changes myself?

On Mon, Apr 5, 2021 at 4:27 PM Todd Kjos <tkjos@google.com> wrote:

We recently added some android common kernels and deprecated others.

Please add these to kernelci testing:

repo: https://android.googlesource.com/kernel/common
branches:

android13-5.10
android12-5.10-lts
android11-5.4-lts


Please remove these branches (I may have already mentioned some of them):

android-3.18-n-release
android-4.4-n
android-4.4-n-release
android-3.18-o-release
android-4.4-o-release
android-4.9-o-release


Thanks,

-Todd


Re: Kernel CI setup

Krishnan, Neelima <neelima.krishnan@...>
 

Hi,

 

Following up here… I was able to install the kernelci frontend and backend locally. I commented out the Centos 7.3 nodejs Hack in roles/install-deps/tasks/main.yml and installed libhttp-parser-dev in the Debian buster VM manually. I am now able to see the frontend running at http://127.0.0.1:5000/

 

I will send a PR to update the documentation with some gaps I see.

 

I want to integrate Jenkins and LAVA into this local setup. Do I do this using https://github.com/kernelci/kernelci-jenkins and https://github.com/kernelci/lava-docker? Is there a BKM I could use?

 

Thanks,

Neelima

From: Krishnan, Neelima
Sent: Tuesday, April 6, 2021 6:35 PM
To: Guillaume Tucker <guillaume.tucker@...>
Cc: kernelci@groups.io
Subject: RE: Kernel CI setup

 

Hi Guillaume,

 

Thank you for the quick reply. I was able to go forward with the backend installation. The problem was the URL I had set in the curl command.

curl -v -XPOST -H "Content-Type: application/json" -H "Authorization: master-key" "http://localhost:8888/token" -d '{"email": "xxxx@...", "username": "xxx", "admin": 1}'

I had set this to point to http://api.mydomain.local:8888.

I got a token from the backend.

 

For setting up the frontend, do I use https://github.com/kernelci/kernelci-frontend-config.git? Do I use the same VM as the one I had set up for the backend, or do I create a new VM “kernelci-frontend”?

 

Either way, when I run the ansible command I get this:

 

TASK [install-deps : Centos 7.3 nodejs Hack (bug 1481008 / 1481470)] ******************************************************************************************

fatal: [kernelci-frontend]: FAILED! => {"msg": "The conditional check 'ansible_distribution == \"CentOS\" and ansible_distribution_major_version == \"7\" and ansible_distribution_version | search(\"7.3\")' failed. The error was: template error while templating string: no filter named 'search'. String: {% if ansible_distribution == \"CentOS\" and ansible_distribution_major_version == \"7\" and ansible_distribution_version | search(\"7.3\") %} True {% else %} False {% endif %}\n\nThe error appears to be in '/home/labuser/kernelci/kbc/kernelci-frontend-config/roles/install-deps/tasks/main.yml': line 28, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: Centos 7.3 nodejs Hack (bug 1481008 / 1481470)\n  ^ here\n"}

I did update the secret.yml with the token I got from the backend.

Neelima

 

From: Guillaume Tucker <guillaume.tucker@...>
Sent: Tuesday, April 6, 2021 1:29 AM
To: Krishnan, Neelima <neelima.krishnan@...>
Cc: kernelci@groups.io
Subject: Re: Kernel CI setup

 

 

Hello,

 

On 06/04/2021 05:47, Krishnan, Neelima wrote:

Guillaume,

 

I am trying to setup the kernel ci local instance in my lab, as per instructions from: https://github.com/kernelci/kernelci-doc/wiki/Setting-up-a-local-development-instance

This documentation should still be mostly correct although it
might have a few inaccuracies and gaps to fill.  Also, the
installation steps are quite lengthy and complicated even for
someone familiar with the code.  Having new people like you
trying to install it and providing feedback is a great way of
improving things.

I am not using Docker for the installation. Instead I am trying out the VM method. I was able to use the kernelci-backend-config git to install the backend on a VM. Now I am trying to generate the tokens as per the instructions in the INSTALL.md @ https://github.com/kernelci/kernelci-doc/wiki/Setting-up-a-local-development-instance

 

Here is what I get. This is not a proxy issue. I am not sure how to move forward. Can you please advise on how to proceed?

 

* Connected to api.mydomain.local (127.0.0.1) port 8888 (#0)

> POST /token HTTP/1.1

> Host: api.mydomain.local:8888

> User-Agent: curl/7.64.0

> Accept: */*

> Content-Type: application/json

> Authorization: master-key

> Content-Length: 63

* upload completely sent off: 63 out of 63 bytes

< HTTP/1.1 403 Operation not permitted: provided token is not authorized

< Content-Length: 12

< Vary: Accept-Encoding

< Server: TornadoServer/3.2.2

< Date: Tue, 06 Apr 2021 01:45:25 GMT

< Access-Control-Allow-Headers: authorization

< Content-Type: application/json; charset=UTF-8

* Connection #0 to host api.mydomain.local left intact

A few things to check:

* Is the kernelci-backend server process running?

To start it manually (for example, on port 5001):

cd kernelci-backend/app
python server.py --port=5001


To start the Celery process manually:

cd kernelci-backend/app
sudo python \
    -OO \
    -R \
    /srv/.venv/kernelci-backend/bin/celery worker \
    -Ofair \
    --without-gossip \
    --autoscale=24,6 \
    --loglevel=INFO \
    --app=taskqueue \
    --pidfile=/tmp/kernelci-celery.pid


If you start them manually, you'll see the logs directly in the
terminals and that should help debugging issues.  On a real
deployment, these services are typically run with systemd and the
backend is behind a web server such as nginx.

* Is the Mongo DB service running?

$ sudo systemctl | grep mongo
mongod.service  loaded active running   MongoDB Database Server


* Which API tokens have been created?


$ mongo kernel-ci
> db['api-token'].find({}, {token: 1, properties: 1})
{ "_id" : ObjectId("5bc51d7bb8d49ee75dbd5a6e"), "properties" : [ 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0 ], "token" : "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX" }
{ "_id" : ObjectId("5dd80903469d6ddc79e7f7cc"), "token" : "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX", "properties" : [ 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0 ] }


* Can the token be used with the API?


$ curl -X GET -H "Authorization: XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX" http://localhost:5001/version
{"code":200,"result":[{"full_version":"2020.11.2","version":"2020.11.2"}]}

If all these things work, then it's possible to start sending
build and test results to it manually.  Setting up a frontend
instance would be a logical next step, and potentially Jenkins to
have a fully automated pipeline although that's not really needed
in a local development setup.

Hope this helps!

Best wishes,
Guillaume

 

 

 


Re: Kernel CI setup

Krishnan, Neelima <neelima.krishnan@...>
 

Hi Guillaume,

 

Thank you for the quick reply. I was able to go forward with the backend installation. The problem was the URL I had set in the curl command.

curl -v -XPOST -H "Content-Type: application/json" -H "Authorization: master-key" "http://localhost:8888/token" -d '{"email": "xxxx@...", "username": "xxx", "admin": 1}'

I had set this to point to http://api.mydomain.local:8888.

I got a token from the backend.

 

For setting up the frontend, do I use https://github.com/kernelci/kernelci-frontend-config.git? Do I use the same VM as the one I had set up for the backend, or do I create a new VM “kernelci-frontend”?

 

Either way, when I run the ansible command I get this:

 

TASK [install-deps : Centos 7.3 nodejs Hack (bug 1481008 / 1481470)] ******************************************************************************************

fatal: [kernelci-frontend]: FAILED! => {"msg": "The conditional check 'ansible_distribution == \"CentOS\" and ansible_distribution_major_version == \"7\" and ansible_distribution_version | search(\"7.3\")' failed. The error was: template error while templating string: no filter named 'search'. String: {% if ansible_distribution == \"CentOS\" and ansible_distribution_major_version == \"7\" and ansible_distribution_version | search(\"7.3\") %} True {% else %} False {% endif %}\n\nThe error appears to be in '/home/labuser/kernelci/kbc/kernelci-frontend-config/roles/install-deps/tasks/main.yml': line 28, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: Centos 7.3 nodejs Hack (bug 1481008 / 1481470)\n  ^ here\n"}

I did update the secret.yml with the token I got from the backend.

Neelima

 

From: Guillaume Tucker <guillaume.tucker@...>
Sent: Tuesday, April 6, 2021 1:29 AM
To: Krishnan, Neelima <neelima.krishnan@...>
Cc: kernelci@groups.io
Subject: Re: Kernel CI setup

 

 

Hello,

 

On 06/04/2021 05:47, Krishnan, Neelima wrote:

Guillaume,

 

I am trying to setup the kernel ci local instance in my lab, as per instructions from: https://github.com/kernelci/kernelci-doc/wiki/Setting-up-a-local-development-instance

This documentation should still be mostly correct although it
might have a few inaccuracies and gaps to fill.  Also, the
installation steps are quite lengthy and complicated even for
someone familiar with the code.  Having new people like you
trying to install it and providing feedback is a great way of
improving things.

I am not using Docker for the installation. Instead I am trying out the VM method. I was able to use the kernelci-backend-config git to install the backend on a VM. Now I am trying to generate the tokens as per the instructions in the INSTALL.md @ https://github.com/kernelci/kernelci-doc/wiki/Setting-up-a-local-development-instance

 

Here is what I get. This is not a proxy issue. I am not sure how to move forward. Can you please advise on how to proceed?

 

* Connected to api.mydomain.local (127.0.0.1) port 8888 (#0)

> POST /token HTTP/1.1

> Host: api.mydomain.local:8888

> User-Agent: curl/7.64.0

> Accept: */*

> Content-Type: application/json

> Authorization: master-key

> Content-Length: 63

* upload completely sent off: 63 out of 63 bytes

< HTTP/1.1 403 Operation not permitted: provided token is not authorized

< Content-Length: 12

< Vary: Accept-Encoding

< Server: TornadoServer/3.2.2

< Date: Tue, 06 Apr 2021 01:45:25 GMT

< Access-Control-Allow-Headers: authorization

< Content-Type: application/json; charset=UTF-8

* Connection #0 to host api.mydomain.local left intact

A few things to check:

* Is the kernelci-backend server process running?

To start it manually (for example, on port 5001):

cd kernelci-backend/app
python server.py --port=5001


To start the Celery process manually:

cd kernelci-backend/app
sudo python \
    -OO \
    -R \
    /srv/.venv/kernelci-backend/bin/celery worker \
    -Ofair \
    --without-gossip \
    --autoscale=24,6 \
    --loglevel=INFO \
    --app=taskqueue \
    --pidfile=/tmp/kernelci-celery.pid


If you start them manually, you'll see the logs directly in the
terminals and that should help debugging issues.  On a real
deployment, these services are typically run with systemd and the
backend is behind a web server such as nginx.

* Is the Mongo DB service running?

$ sudo systemctl | grep mongo
mongod.service  loaded active running   MongoDB Database Server


* Which API tokens have been created?


$ mongo kernel-ci
> db['api-token'].find({}, {token: 1, properties: 1})
{ "_id" : ObjectId("5bc51d7bb8d49ee75dbd5a6e"), "properties" : [ 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0 ], "token" : "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX" }
{ "_id" : ObjectId("5dd80903469d6ddc79e7f7cc"), "token" : "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX", "properties" : [ 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0 ] }


* Can the token be used with the API?


$ curl -X GET -H "Authorization: XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX" http://localhost:5001/version
{"code":200,"result":[{"full_version":"2020.11.2","version":"2020.11.2"}]}

If all these things work, then it's possible to start sending
build and test results to it manually.  Setting up a frontend
instance would be a logical next step, and potentially Jenkins to
have a fully automated pipeline although that's not really needed
in a local development setup.

Hope this helps!

Best wishes,
Guillaume

 

 

 


Re: Kernel CI setup

Guillaume Tucker
 

+kernelci@groups.io

Hello,

On 06/04/2021 05:47, Krishnan, Neelima wrote:

Guillaume,

 

I am trying to setup the kernel ci local instance in my lab, as per instructions from: https://github.com/kernelci/kernelci-doc/wiki/Setting-up-a-local-development-instance
This documentation should still be mostly correct although it
might have a few inaccuracies and gaps to fill.  Also, the
installation steps are quite lengthy and complicated even for
someone familiar with the code.  Having new people like you
trying to install it and providing feedback is a great way of
improving things.

I am not using Docker for the installation. Instead I am trying out the VM method. I was able to use the kernelci-backend-config git to install the backend on a VM. Now I am trying to generate the tokens as per the instructions in the INSTALL.md @ https://github.com/kernelci/kernelci-doc/wiki/Setting-up-a-local-development-instance

 

Here is what I get. This is not a proxy issue. I am not sure how to move forward. Can you please advise on how to proceed?

 

* Connected to api.mydomain.local (127.0.0.1) port 8888 (#0)

> POST /token HTTP/1.1
> Host: api.mydomain.local:8888
> User-Agent: curl/7.64.0
> Accept: */*
> Content-Type: application/json
> Authorization: master-key
> Content-Length: 63
 
* upload completely sent off: 63 out of 63 bytes

< HTTP/1.1 403 Operation not permitted: provided token is not authorized

< Content-Length: 12

< Vary: Accept-Encoding

< Server: TornadoServer/3.2.2

< Date: Tue, 06 Apr 2021 01:45:25 GMT

< Access-Control-Allow-Headers: authorization

< Content-Type: application/json; charset=UTF-8



* Connection #0 to host api.mydomain.local left intact
A few things to check:

* Is the kernelci-backend server process running?

To start it manually (for example, on port 5001):

cd kernelci-backend/app
python server.py --port=5001

To start the Celery process manually:

cd kernelci-backend/app
sudo python \
    -OO \
    -R \
    /srv/.venv/kernelci-backend/bin/celery worker \
    -Ofair \
    --without-gossip \
    --autoscale=24,6 \
    --loglevel=INFO \
    --app=taskqueue \
    --pidfile=/tmp/kernelci-celery.pid

If you start them manually, you'll see the logs directly in the
terminals and that should help debugging issues.  On a real
deployment, these services are typically run with systemd and the
backend is behind a web server such as nginx.

* Is the Mongo DB service running?

$ sudo systemctl | grep mongo
mongod.service  loaded active running   MongoDB Database Server
* Which API tokens have been created?

$ mongo kernel-ci
db['api-token'].find({}, {token: 1, properties: 1})
{ "_id" : ObjectId("5bc51d7bb8d49ee75dbd5a6e"), "properties" : [ 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0 ], "token" : "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX" }
{ "_id" : ObjectId("5dd80903469d6ddc79e7f7cc"), "token" : "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX", "properties" : [ 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0 ] }
* Can the token be used with the API?

$ curl -X GET -H "Authorization: XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX" http://localhost:5001/version
{"code":200,"result":[{"full_version":"2020.11.2","version":"2020.11.2"}]}

If all these things work, then it's possible to start sending
build and test results to it manually.  Setting up a frontend
instance would be a logical next step, and potentially Jenkins to
have a fully automated pipeline although that's not really needed
in a local development setup.

Hope this helps!

Best wishes,
Guillaume


 


Please add/remove android kernels

Todd Kjos <tkjos@...>
 

We recently added some android common kernels and deprecated others.

Please add these to kernelci testing:

branches:
android13-5.10
android12-5.10-lts
android11-5.4-lts

Please remove these branches (I may have already mentioned some of them):
android-3.18-n-release
android-4.4-n
android-4.4-n-release
android-3.18-o-release
android-4.4-o-release
android-4.9-o-release

Thanks,

-Todd


Re: can't access kernelci results

Todd Kjos <tkjos@...>
 

Working again. Thanks Guillaume.


On Mon, Apr 5, 2021 at 1:58 PM Guillaume Tucker <guillaume.tucker@...> wrote:
On 05/04/2021 21:39, Guenter Roeck via groups.io wrote:
> On Mon, Apr 5, 2021 at 12:30 PM Todd Kjos via groups.io
> <tkjos=google.com@groups.io> wrote:
>>
>> I get errors when attempting to access https://linux.kernelci.org/job/android/. Is this a known issue? Any ETA for a fix?
>>
> Looks like the problem is not limited to android. I get error 500 for
> jobs and build results.

That was another case of the mongod service stopping, it's pretty
rare but has happened a couple of times before.  Things are now
back to normal, but we've missed a few results while it was down.
Sorry for the disruption.

We've deployed a monitoring tool on some servers but not yet on
this one; it should happen soon, so this kind of issue would be
detected much quicker.  We've also started looking into using a
Mongo database in Azure rather than a manual install to reduce
maintenance work.

Best wishes,
Guillaume


Re: can't access kernelci results

Guillaume Tucker
 

On 05/04/2021 21:39, Guenter Roeck via groups.io wrote:
On Mon, Apr 5, 2021 at 12:30 PM Todd Kjos via groups.io
<tkjos=google.com@groups.io> wrote:

I get errors when attempting to access https://linux.kernelci.org/job/android/. Is this a known issue? Any ETA for a fix?
Looks like the problem is not limited to android. I get error 500 for
jobs and build results.
That was another case of the mongod service stopping, it's pretty
rare but has happened a couple of times before. Things are now
back to normal, but we've missed a few results while it was down.
Sorry for the disruption.

We've deployed a monitoring tool on some servers but not yet on
this one; it should happen soon, so this kind of issue would be
detected much quicker. We've also started looking into using a
Mongo database in Azure rather than a manual install to reduce
maintenance work.

Best wishes,
Guillaume


Re: can't access kernelci results

Guenter Roeck
 

On Mon, Apr 5, 2021 at 12:30 PM Todd Kjos via groups.io
<tkjos=google.com@groups.io> wrote:

I get errors when attempting to access https://linux.kernelci.org/job/android/. Is this a known issue? Any ETA for a fix?
Looks like the problem is not limited to android. I get error 500 for
jobs and build results.

Guenter

-Todd


can't access kernelci results

Todd Kjos <tkjos@...>
 

I get errors when attempting to access https://linux.kernelci.org/job/android/. Is this a known issue? Any ETA for a fix?

-Todd


Re: New kernelci.org website rollout on Friday 2nd April

Guillaume Tucker
 

On 29/03/2021 11:22, Guillaume Tucker wrote:
A new website to provide a better home page and a central place
for documentation is going to be deployed on Friday 2nd April
starting at 09:00 UTC. This will be hosted directly on
kernelci.org so the web dashboard will move to linux.kernelci.org
with redirections so existing links will still work. There will
be a short period during which the website and dashboard won't be
accessible, an email will be sent to confirm when the update is
complete.

Here's a preview of the new website:

https://static.staging.kernelci.org/

with a documentation section which includes files from other
KernelCI projects such as kernelci-core:

https://static.staging.kernelci.org/docs/

The source code for the new website is hosted on GitHub:

https://github.com/kernelci/kernelci-project
This has now all been deployed on kernelci.org and
linux.kernelci.org - thank you for your patience. Please let us
know if you notice anything unexpected.

Best wishes,
Guillaume


New kernelci.org website rollout on Friday 2nd April

Guillaume Tucker
 

A new website to provide a better home page and a central place
for documentation is going to be deployed on Friday 2nd April
starting at 09:00 UTC. This will be hosted directly on
kernelci.org so the web dashboard will move to linux.kernelci.org
with redirections so existing links will still work. There will
be a short period during which the website and dashboard won't be
accessible, an email will be sent to confirm when the update is
complete.

Here's a preview of the new website:

https://static.staging.kernelci.org/

with a documentation section which includes files from other
KernelCI projects such as kernelci-core:

https://static.staging.kernelci.org/docs/

The source code for the new website is hosted on GitHub:

https://github.com/kernelci/kernelci-project

Please let us know if you have any questions or concerns.

Best wishes,
Guillaume
