A "modify" request is a PR that removes entries from the list in addition to adding them. The valid reason for doing
this would be changing the URL of a library after its repo is renamed or the library is moved to another repo.
This request type requires a manual review and merge, but the new URLs should be automatically processed to save the
reviewer from having to check them and work with the library submitter to resolve any issues that are found.
In this case, the `arduino/arduino-lint-action`'s `library-manager` input needs to be set to "update" instead of the
previously hardcoded "submit". The correct setting will be provided by the parser, so the workflow only needs to
implement the use of that setting.
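A sketch of what this might look like in the workflow, assuming the parser exposes the request type as a step output (the step ID and output name here are illustrative, not the real ones):

```yaml
# Illustrative sketch; the "parse" step ID and "type" output name are assumptions.
- name: Lint submissions
  uses: arduino/arduino-lint-action@v1
  with:
    # "submit" for pure additions, "update" for "modify" requests.
    # The parser provides the correct value, so it is no longer hardcoded.
    library-manager: ${{ steps.parse.outputs.type }}
```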
Arduino Lint prints a summary report of the result of linting Arduino projects to stdout. It also offers the option of saving a JSON formatted report to a file.
In addition to rejecting the submission on an error result from Arduino Lint, the workflow also advocates for best
practices in the libraries of Library Manager by commenting a copy of the report to the PR thread if any warnings are
generated.
The machine-readable JSON format of the report file makes it easy to parse in the workflow to determine the warning
count. However, this JSON format is not terribly friendly to human readers. The `text` format report printed to stdout is
intended for that purpose. Previously, the JSON formatted report was commented to the PR thread, resulting in an
unpleasant experience for the submitter.
In the intended application of the `arduino/arduino-lint-action` GitHub Actions action, the report is printed to the log,
the interested user can access the report in the workflow run log, and any machine applications use the report file.
However, in this specialized use case, we need both a text format and a JSON format report file. Although that capability
could be added to the action, it would not likely be of use for other applications. For this reason, it makes more sense
to simply use the Arduino Lint application directly in the workflow. This really doesn't introduce any significant
complexity, since the action is only a thin wrapper.
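Running the CLI directly might look like the following sketch. The report paths, compliance level, and `LIBRARY_PATH` variable are assumptions; `--library-manager` and `--report-file` are real Arduino Lint flags, and the warning-count step assumes the JSON report exposes a top-level `summary` object:

```yaml
- name: Run Arduino Lint
  run: |
    # The human-readable text report goes to stdout; tee captures a copy
    # that can later be commented to the PR thread. --report-file writes
    # the machine-readable JSON report alongside it.
    arduino-lint \
      --compliance permissive \
      --library-manager update \
      --report-file "${{ runner.temp }}/report.json" \
      "$LIBRARY_PATH" | tee "${{ runner.temp }}/report.txt"

- name: Read warning count
  id: report
  run: |
    # Assumes the JSON report's summary object provides a warningCount field.
    echo "warning-count=$(jq .summary.warningCount "${{ runner.temp }}/report.json")" >> "$GITHUB_OUTPUT"
```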
A workflow artifact is used to transfer the PR diff file from the `diff` job to the `parse` job. Once the artifact has
been downloaded by the `parse` job, it no longer serves any purpose.
It's possible the artifact might serve as a vector for exporting secrets from the workflow. Even though I don't have any
specific reasons to believe it is possible to cause secrets to be written to the artifact and the repository doesn't
currently have any secrets beyond `GITHUB_TOKEN`, nor need for any, it's still best to remove the unnecessary artifact.
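One way to do this, sketched with a third-party action (the artifact name is an assumption):

```yaml
# Illustrative sketch; geekyeggo/delete-artifact is one third-party action
# that can remove an artifact once it has served its purpose.
- name: Remove diff artifact
  uses: geekyeggo/delete-artifact@v2
  with:
    name: diff  # assumed artifact name
```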
The workflow already handles all expected failures in a manner that is as automated and friendly to the submitter as
possible.
However, there is always the chance for unexpected failures caused by a bug or service outage, which are in no way the
fault of the submitter. In this event, the workflow would previously fail without any clear explanation of what had
happened. This would be likely to cause confusion to the submitter. Since the system is very automated, this failure
might also go unnoticed by the repository maintainers.
A better way to handle unexpected failures is to:
- Add a special label ("status: maintenance required").
- Request a review from the Tooling Team.
- Comment to explain to the submitter that something went wrong and we will investigate.
GitHub Actions workflow jobs default to the `if: success()` configuration. In this configuration, the job only runs when
the result of its job dependencies was success. When configuring a job to run on a failure result with `if: failure()` it
is logical to assume that the behavior would be inverted: the job would run only when the result of its dependency job
was failure. It does this, but also runs when its dependency job was canceled due to a failure of its own dependency.
This behavior of GitHub Actions resulted in the failure handling jobs running when they were not intended to. That is
avoided by specifying the exact job whose failure they were intended to handle in the conditional. It is still necessary
to use `failure()` in the conditional, otherwise they retain the default `success()` configuration and never run on a failure.
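The resulting pattern can be sketched as follows (job names are illustrative):

```yaml
# failure() is still required: without it, the implicit success() condition
# would prevent the job from ever running after a failure. The
# needs.<job>.result check narrows the trigger to the exact job whose
# failure this job is intended to handle, excluding cancellations caused
# by upstream failures.
unexpected-fail:
  needs: parse
  if: failure() && needs.parse.result == 'failure'
  runs-on: ubuntu-latest
  steps:
    - run: echo "Handle the failure (label, review request, comment)"
```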
In the event a PR is detected as something other than a library submission, a review is requested from the Tooling Team
and a comment is made to the PR thread explaining the situation.
Previously, this job was named `request-review`. However, there are other circumstances under which a review will be
requested (e.g., merge conflict). So this was not a very good job name.
This job name is not referenced anywhere else in the workflow, so it currently only serves a documentation role and
changing it has no functional effect.
The `octokit/request-action` GitHub Actions action is designed to accept API request parameters as arbitrary action
inputs in the workflow. Because they have made no attempt to define all possible input keys in the action metadata,
normal and correct usage of the action causes GitHub Actions to display warnings in the workflow run log and summary. For
example:
Unexpected input(s) 'owner', 'repo', 'issue_number', 'labels', valid inputs are ['route', 'mediaType']
This has the potential to cause confusion, so I added a comment to the workflow explaining that this warning is expected
and doesn't indicate a problem. That comment somehow ended up attached to a random occurrence of
`octokit/request-action` in the workflow. It makes the most sense to put it on the first usage of the action in the
workflow, to make the information easy to find.
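For context, a typical invocation that produces the warning looks something like this sketch (every input beyond `route` is an arbitrary API request parameter the action forwards; the JSON-array encoding of `labels` is how the action accepts list parameters, though the exact values here are illustrative):

```yaml
- name: Label PR
  uses: octokit/request-action@v2.x
  with:
    # All inputs other than route/mediaType trigger the "Unexpected
    # input(s)" warning, which is expected and harmless.
    route: POST /repos/{owner}/{repo}/issues/{issue_number}/labels
    owner: ${{ github.repository_owner }}
    repo: ${{ github.event.repository.name }}
    issue_number: ${{ github.event.pull_request.number }}
    labels: |
      ["status: maintenance required"]
  env:
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```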
It's certain that merge conflicts will occur as submitters take time to resolve blocking issues and other submissions are
accepted in the meantime. The PR merge will fail in this case.
Previously there was no provision for handling merge conflicts. The workflow run just failed at the merge job with no
comment from the bot.
The goal is for the automated system to work with the submitter as much as possible. Towards this goal, when the merge
fails, the bot will now comment to explain the problem and provide a link to a general tutorial for using the GitHub PR
merge conflict system.
A review is requested from the Tooling Team so that they can assist in the event the user is unable to resolve the merge
conflict, or the failure was caused by something else.
There is no good reason for a submission to consist of more than one commit.
As the submitter works with the bot to produce a compliant submission, they will sometimes end up with PRs that consist
of multiple non-atomic commits, which would pollute the repository's commit history if not squashed at the time of the
merge.
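A squash merge via the REST API might be sketched like this (the `octokit/request-action` usage is illustrative; the endpoint and `merge_method: squash` value are part of the real GitHub API):

```yaml
- name: Squash merge pull request
  uses: octokit/request-action@v2.x
  with:
    route: PUT /repos/{owner}/{repo}/pulls/{pull_number}/merge
    owner: ${{ github.repository_owner }}
    repo: ${{ github.event.repository.name }}
    pull_number: ${{ github.event.pull_request.number }}
    # Collapse any non-atomic commits into a single commit on merge.
    merge_method: squash
  env:
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```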
The "Manage PRs" workflow can be triggered by either a `pull_request_target` or `issue_comment` event. This means that
the `issue_number` parameter of the GitHub API request must be defined using both the `github.event.pull_request.number`
and `github.event.issue.number` properties because a different one is defined depending on which event was the trigger.
The exception is the API request for the bot comment used to provide immediate feedback when the workflow is triggered by
a comment. This API request is only ever used with an `issue_comment` event trigger, so the
`github.event.pull_request.number` property is not needed.
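One way to express this in the workflow is to concatenate the two expressions; whichever property is not defined for the triggering event evaluates to an empty string:

```yaml
# Exactly one of the two properties is defined per trigger event, so the
# concatenation always yields a single PR number.
issue_number: ${{ github.event.pull_request.number }}${{ github.event.issue.number }}
```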
When there is a problem with the submission that must be resolved in the library repository, a comment mentioning
`@ArduinoBot` is used to trigger the "Manage PRs" workflow.
We are accustomed to seeing the status of checks in the PR thread while workflows are running. However, with a comment
triggered workflow this does not happen. The only indication of the workflow running is on the "Actions" tab. This can
leave the submitter wondering if their comment had any effect as the workflow takes some time to run before providing
feedback.
This uncertainty can be avoided by making the bot immediately comment to acknowledge that the check is in progress.
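A sketch of such an acknowledgment step (the condition, comment text, and quoted-string body encoding are assumptions):

```yaml
- name: Comment in-progress acknowledgment
  # Only needed for the comment-triggered case; pull_request_target runs
  # already show their status in the PR thread.
  if: github.event_name == 'issue_comment'
  uses: octokit/request-action@v2.x
  with:
    route: POST /repos/{owner}/{repo}/issues/{issue_number}/comments
    owner: ${{ github.repository_owner }}
    repo: ${{ github.event.repository.name }}
    issue_number: ${{ github.event.issue.number }}
    body: |
      "Hello! I'm checking your submission again. I'll reply here with the results in a few minutes."
  env:
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```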
The Library Manager indexer logs are available for each library in the index. Example:
http://downloads.arduino.cc/libraries/logs/github.com/cmaglie/FlashStorage/
This can be a valuable tool for monitoring the indexing of releases, allowing the library authors to troubleshoot without
the need for assistance from a maintainer of this repository.
Previously, as soon as a submission was compliant, the PR was merged silently. This might leave the submitter wondering
whether the submission process is complete.
There is a delay of uncertain duration while the library release is added to the server, the index is generated, and the
index propagates through the CDN. This might result in an increased support burden as submitters comment or open issues
asking what is happening.
This is avoided by making the bot comment on the PR thread after the PR is merged, notifying the submitter that the
process was successful and that there will be some delay before the library is available from Library Manager.
Indexer log URLs are provided for each submission, allowing the submitter to monitor the indexing process, both
immediately after acceptance and after they make future releases.
The "Manage PRs" GitHub Actions workflow generates a matrix job for each library submitted by the PR. The default job
name is generated from the job's matrix object. This contains the complete submission data, which results in a long and
somewhat cryptic job name that can make the workflow run more difficult to interpret.
The only necessary information is the description of the job's purpose ("check") and the submission URL (multiple URLs
per PR are supported). A custom job name makes it possible to use only this information in the job name.
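A sketch of such a custom job name (the matrix object's structure and the `submissionURL` field name are assumptions, as is the source of the matrix data):

```yaml
check:
  # Overrides the default name, which would dump the entire matrix object.
  name: check (${{ matrix.submission.submissionURL }})
  strategy:
    matrix:
      # Assumed: the parse job outputs a JSON array of submission objects.
      submission: ${{ fromJson(needs.parse.outputs.submissions) }}
  runs-on: ubuntu-latest
```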
The official policy is that anyone is allowed to submit any library to Library Manager, regardless of whether they have
any involvement with the development and maintenance of the library.
As someone very much involved in the submission process, I have wondered about this myself, so it is very important to
document it clearly.
Some of the wording of the existing documentation implied that only the owner of the library could submit it, so this
text has been adjusted as well.
The yamllint configuration advocates for keeping line lengths within 120 characters. While exceeding this length only
results in a warning, I think it is beneficial to stay within that limit when it is possible and doesn't have a harmful
effect. In that spirit, I have reduced the long lines where this was easily done. There remain a few that are either not
possible or else not reasonable to reduce, and that's OK.
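The relevant fragment of the yamllint configuration, which warns rather than errors on lines over 120 characters, looks like this (standard yamllint `line-length` rule syntax):

```yaml
rules:
  line-length:
    max: 120
    # A warning, not an error, so the few unavoidable long lines pass.
    level: warning
```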
After I set up caching in the template workflows, doubts were raised about whether it provided any benefits. I don't know
enough about this subject to make a call on that and I have been unable to get any more information on the subject.
Since the caching significantly increases the complexity of the workflows, which may make them more difficult to maintain
and contribute to, I think it's best to just remove all the caching for now. I hope to eventually be able to revisit this
topic and restore caching in any workflows where it is definitely beneficial.
On every push and pull request that affects relevant files, and periodically, run yamllint to check the YAML files of
the repository for issues.
The .yamllint.yml file is used to configure yamllint:
https://yamllint.readthedocs.io/en/stable/configuration.html
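The trigger configuration for such a workflow might be sketched as follows (the exact paths and schedule are assumptions; `?` in a GitHub path filter matches zero or one of the preceding character, so `*.ya?ml` covers both extensions):

```yaml
on:
  push:
    paths:
      - "**.ya?ml"
      - ".yamllint.yml"
  pull_request:
    paths:
      - "**.ya?ml"
      - ".yamllint.yml"
  schedule:
    # Run periodically to catch issues introduced by yamllint updates.
    - cron: "0 4 * * MON"
```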
It will be helpful to the reader to be able to get an overview of the documentation content and quickly navigate to the
section of interest.
The table of contents is automatically generated using the markdown-toc tool.
Because it can be easy to forget to update the table of contents when the documentation content is changed, I have added
a CI workflow that checks for missed updates to the readme's table of contents. On every push or pull request that affects the repository's
documentation, it will check whether the table of contents matches the content.
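The check can be sketched as regenerating the ToC and failing on any resulting diff (this assumes the readme contains the `<!-- toc -->` markers that markdown-toc uses for in-place insertion; the `--maxdepth` value is illustrative):

```yaml
- uses: actions/checkout@v4
- name: Rebuild table of contents
  run: npx --yes markdown-toc --maxdepth 3 -i README.md
- name: Check for changes
  # A non-empty diff means the committed ToC is out of date.
  run: git diff --color --exit-code README.md
```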
At the time it was created, there was only one official Arduino development application: Arduino IDE. Since that time,
Arduino Web Editor and Arduino CLI have been created, both of which implement Library Manager in their own manners.
Previously, library submitters were not exposed to the internal workings of the Library Manager index generation system,
so they only needed to be concerned with the public index file. Now the submitters will be interacting directly with the
Library Manager submission list. This might lead to some confusion between that list and the Library Manager index, so
it's important to be clear in the terminology used in the documentation.