# Compare commits: `purge-arch...master`

85 commits
| SHA1 |
|---|
| bba85d0549 |
| af1c5695c6 |
| a4224eaa56 |
| 264f6bd0f6 |
| b1f074cc5d |
| b30025b291 |
| 270bcc2d58 |
| 5e90acd5b3 |
| a68d6f826d |
| 05266bd8ac |
| 26e2d664ec |
| 9a5331398d |
| c14a071c8d |
| e6e23bf1cf |
| 682cffc308 |
| 89d625229b |
| 927c3124b8 |
| 8dc187abc3 |
| 0089da7ecb |
| ae81c12fc5 |
| ca2246667c |
| 67cc5caaa5 |
| a5e2b831b5 |
| 1151490ae8 |
| dc01ff137e |
| 0f00303939 |
| 0bea35576a |
| 8bb3147e4e |
| 8956f1d292 |
| 0816049273 |
| e5d5594775 |
| 83211a4923 |
| ed04ff4017 |
| 804da83d7b |
| bc46effe08 |
| ddc32f45d0 |
| a49077803c |
| f2680c6221 |
| 08da394e71 |
| bae26ccdb1 |
| d0538ddf8b |
| 5a83ccefa9 |
| 5f34822b78 |
| 60c1df78f8 |
| 3159928d85 |
| 24cd357bb7 |
| 26e89f6467 |
| 6406f329ad |
| 843c618e01 |
| 914d188dcf |
| 7eddf73cf0 |
| 6ceee1e063 |
| 0b51a5e7c3 |
| bf75f67ac5 |
| 5ab9f7d46a |
| c39e924cf9 |
| ee4634e435 |
| ecf0a3a94f |
| 2cd6c04450 |
| f9b727041d |
| f55163208a |
| 3cf210a0dc |
| f9d4f888eb |
| 301e019185 |
| 4df86b48d6 |
| f421d82354 |
| a301665db5 |
| cbf1df2432 |
| 9527d7e290 |
| a4c8146fd0 |
| 24248c4aad |
| 88da43ac4d |
| 1736a09b8c |
| 2edc4efb02 |
| ce01549b76 |
| 4365494c73 |
| 2cf19bc7ac |
| fd885ff12f |
| 3e73b8444a |
| 8c0fd0b960 |
| f6188cc028 |
| 0d32406949 |
| 2860a20ad6 |
| ed3af6676b |
| 05737bcde8 |
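For maintainers reproducing a listing like the one above locally: GitHub's `base...head` compare page shows the commits reachable from the head branch but not from the base, which corresponds to `git log base..head` (two dots) on the command line. A minimal self-contained sketch, using a throwaway repository with branch names borrowed from the page header:

```shell
# Build a throwaway repo with a 'purge-arch' branch point and one
# extra commit on 'master', then list what a purge-arch...master
# compare page would show.
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
git config user.email demo@example.com
git config user.name demo
echo one > f.txt
git add f.txt
git commit -qm "base commit"
git branch -M master          # normalize the default branch name
git branch purge-arch         # base branch points at the first commit
echo two > f.txt
git commit -qam "only on master"
# Commits in master that are not in purge-arch:
git log --oneline purge-arch..master
```

The two-dot range is asymmetric: swapping the branch names would list commits unique to `purge-arch` instead.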
## `.claude/settings.local.json` (new file, 14 lines)

````diff
@@ -0,0 +1,14 @@
+{
+  "permissions": {
+    "allow": [
+      "mcp__plugin_context-mode_context-mode__ctx_batch_execute",
+      "mcp__plugin_context7_context7__resolve-library-id",
+      "mcp__plugin_context7_context7__query-docs",
+      "Bash(go:*)",
+      "Bash(./awesome-docker:*)",
+      "Bash(tmux send-keys:*)",
+      "Bash(tmux capture-pane:*)",
+      "Bash(tmux:*)"
+    ]
+  }
+}
````
## `.github/CODEOWNERS` (2 lines changed)

````diff
@@ -1 +1 @@
-*.md @veggiemonk @agebhar1 @dmitrytokarev @gesellix @mashb1t @moshloop @vegasbrianc @noteed
+* @veggiemonk @agebhar1 @dmitrytokarev @gesellix @mashb1t @moshloop @vegasbrianc @noteed
````
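A note on the pattern change above: CODEOWNERS uses gitignore-style patterns, so `*.md` matched only Markdown files while a bare `*` matches every file in the repository; when several patterns match a path, the last matching rule wins. A minimal sketch (the `/docs/` rule and `@docs-team` are hypothetical, not from this repo):

```
# Default owners for everything in the repository.
* @veggiemonk

# Hypothetical: a later, more specific rule takes precedence for /docs/.
/docs/ @docs-team
```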
## `.github/CONTRIBUTING.md` (94 → 56 lines)

````diff
@@ -1,94 +1,56 @@
 # Contributing to awesome-docker
 
-First: if you're unsure or afraid of anything, just ask or submit the issue or pull request anyways. You won't be yelled at for giving your best effort. The worst that can happen is that you'll be politely asked to change something. We appreciate any sort of contributions, and don't want a wall of rules to get in the way of that.
+Thanks for taking the time to contribute.
 
-However, for those individuals who want a bit more guidance on the best way to contribute to the project, read on. This document will cover what we're looking for. By addressing all the points we're looking for, it raises the chances we can quickly merge or address your contributions.
+This repository is a curated list of Docker/container resources plus a Go-based maintenance CLI used by CI. Contributions are welcome for both content and tooling.
 
-We appreciate and recognize [all contributors](https://github.com/veggiemonk/awesome-docker/graphs/contributors).
+Please read and follow the [Code of Conduct](./CODE_OF_CONDUCT.md).
 
-Please note that this project is released with a [Contributor Code of Conduct](https://github.com/veggiemonk/awesome-docker/blob/master/.github/CODE_OF_CONDUCT.md). By participating in this project you agree to abide by its terms.
+## What We Accept
 
-# Table of Contents
+- New high-quality Docker/container-related projects
+- Fixes to descriptions, ordering, or categorization
+- Removal of broken, archived, deprecated, or duplicate entries
+- Improvements to the Go CLI and GitHub workflows
 
-- [Mission Statement](#mission-statement)
-- [Quality Standards](#quality-standards)
-- [Contribution Guidelines](#contribution-guidelines)
-- [New Collaborators](#new-collaborators)
+## README Entry Rules
 
-# Mission Statement
+- Use one link per entry.
+- Prefer GitHub project/repository URLs over marketing pages.
+- Keep entries alphabetically sorted within their section.
+- Keep descriptions concise and concrete.
+- Use `:yen:` for paid/commercial services.
+- Use `:ice_cube:` for stale projects (2+ years inactive).
+- Do not use `:skull:`; archived/deprecated projects should be removed.
+- Avoid duplicate links and redirect variants.
 
-`awesome-docker` is a hand-crafted list for high-quality information about Docker and its resources. It should be related or compatible with Docker or containers. If it's just an image built on top of Docker, the project possibly belongs to other [awesome lists](https://github.com/sindresorhus/awesome). You can check the [awesome-selfhosted list](https://github.com/Kickball/awesome-selfhosted) or the [awesome-sysadmin list](https://github.com/n1trux/awesome-sysadmin) as well.
-If it's a **tutorial or a blog post**, they get outdated really quickly so we don't really put them on the list but if it is on a very advanced and/or specific topic, we will consider it!
-If something is awesome, share it (pull request or [issue](https://github.com/veggiemonk/awesome-docker/issues/new) or [chat](https://gitter.im/veggiemonk/awesome-docker)), let us know why and we will help you!
+## Local Validation
 
-# Quality Standards
-
-Note that we can help you achieve those standards, just try your best and be brave.
-We'll guide you to the best of our abilities.
-
-To be on the list, it would be **nice** if entries adhere to these quality standards:
-
-- It should take less than 20 sec to find what is the project, how to install it and how to use it.
-- Generally useful to the community.
-- A project on GitHub with a well documented `README.md` file and plenty of examples is considered high quality.
-- Clearly stating if an entry is related to (Linux) containers and not to Docker. There is an [awesome list](https://github.com/Friz-zy/awesome-linux-containers) for that.
-- Clearly stating "what is it" i.e. which category it belongs to.
-- Clearly stating "what is it for" i.e. mention a real problem it solves (even a small one). Make it clear for the next person.
-- If it is a **WIP** (work in progress, not safe for production), please mention it. (Remember the time before Docker 1.0 ? ;-) )
-- Always put the link to the GitHub project instead of the website!
+```bash
+# Build CLI
+make build
 
-To be on the list, the project **must** have:
+# Validate README formatting and content
+make lint
 
-- How to setup/install the project
-- How to use the project (examples)
+# Run code tests (when touching Go code)
+make test
 
-If your PR is not merged, we will tell you why so that you may be able to improve it.
-But usually, we are pretty relaxed people, so just come and say hi, we'll figure something out together.
+# Optional: full external checks (requires GITHUB_TOKEN)
+./awesome-docker check
+./awesome-docker validate
+```
 
-# Contribution Guidelines
+## Pull Request Expectations
 
-## I want to share a project, what should I do?
+- Keep the PR focused to one logical change.
+- Explain what changed and why.
+- If adding entries, include the target category.
+- If removing entries, explain why (archived, broken, duplicate, etc.).
+- Fill in the PR template checklist.
 
-- **Adding to the list:** Submit a pull request or open an [issue](https://github.com/veggiemonk/awesome-docker/issues/new)
-- **Removing from the list:** Submit a pull request or open an [issue](https://github.com/veggiemonk/awesome-docker/issues/new)
-- Changing something else: Submit a pull request or open an [issue](https://github.com/veggiemonk/awesome-docker/issues/new)
-- Don't know what to do: Open an [issue](https://github.com/veggiemonk/awesome-docker/issues/new) or join our [chat](https://gitter.im/veggiemonk/awesome-docker), let us know what's going on.
+## Maintainer Notes
 
-**join the chat:**
-
-[](https://gitter.im/veggiemonk/awesome-docker)
-
-or you can
-
-**ping us on Twitter:**
-
-* [veggiemonk](https://twitter.com/veggiemonk)
-* [idomyowntricks](https://twitter.com/idomyowntricks)
-* [gesellix](https://twitter.com/gesellix)
-* [dmitrytokarev](https://twitter.com/dmitrytok)
-
-### Rules for Pull Request
-
-- Each item should be limited to one link, no duplicates, no redirection (careful with `http` vs `https`!)
-- The link should be the name of the package or project or website
-- Description should be clear and concise (read it out loud to be sure)
-- Description should follow the link, on the same line
-- Entries are listed alphabetically, please respect the order
-- If you want to add more than one link, please don't do all PR on the exact same line, it usually results in conflicts and your PR cannot be automatically merged...
-
-Please contribute links to packages/projects you have used or are familiar with. This will help ensure high-quality entries.
-
-#### Your commit message will be a [tweet](https://twitter.com/awesome_docker) so write a [good commit message](https://chris.beams.io/posts/git-commit/), keep that in mind :)
-
-# New Collaborators
-
-If you just joined the team of maintainers for this repo, first of all: WELCOME!
-
-If it is your first time maintaining an open source project, read the [best practice guides for maintainers](https://opensource.guide/best-practices/).
-
-Here are the few things you need to know:
-* We don't push directly to the master branch. Every entry **MUST** be reviewed!
-* Each entry should be in accordance to this quality standards and contribution guidelines.
-* To ask a contributor to make a change, just copy paste this message [here](https://github.com/veggiemonk/awesome-docker/pull/289#issuecomment-285608004) and change few things like names and stuff. **The main idea is to help people making great projects.**
-* If something seems weird, i.e. if you don't understand what a project does or the documentation is poor, don't hesitate to (nicely) ask for more explanation (see previous point).
-* Say thank you to people who contribute to this project! It may not seems like much but respect and gratitude are important :D
+- Changes should be reviewed before merge.
+- Prefer helping contributors improve a PR over silently rejecting it.
+- Keep `.github` documentation and workflows aligned with current tooling.
````
## `.github/ISSUE_TEMPLATE.md` (deleted)

````diff
@@ -1,15 +0,0 @@
-Hi,
-
-I would like to add a link.
-
-**REPO**:
-
-**DESCRIPTION**:
-
-**AUTHOR**:
-
-Or directly write it:
-```markdown
-[REPO](https://github.com/AUTHOR/REPO) - DESCRIPTION. By [@AUTHOR](https://github.com/AUTHOR)
-```
-
````
## `.github/ISSUE_TEMPLATE/add-a-project.md` (new file, 21 lines)

````diff
@@ -0,0 +1,21 @@
+---
+name: Add a project
+about: Add a new project to the list
+title: "add: [PROJECT_NAME] in [SECTION_NAME]"
+labels: pending-evaluation
+assignees: ''
+
+---
+
+Category:
+Repository link:
+Description (one sentence):
+Author:
+Why this should be in the list:
+Notes (`:yen:` if relevant):
+
+Or directly write it:
+
+```markdown
+[REPO](https://github.com/AUTHOR/REPO) - DESCRIPTION.
+```
````
## `.github/MAINTENANCE.md` (116 → 81 lines)

````diff
@@ -1,116 +1,81 @@
-# 🔧 Maintenance Guide for Awesome Docker
+# Maintenance Guide
 
-This guide helps maintainers keep the awesome-docker list up-to-date and high-quality.
+This guide describes how maintainers keep the list and automation healthy.
 
-## 🤖 Automated Systems
+## Automated Workflows
 
-### Weekly Health Reports
-- **What**: Checks all GitHub repositories for activity, archived status, and maintenance
-- **When**: Every Monday at 9 AM UTC
-- **Where**: Creates/updates a GitHub issue with label `health-report`
-- **Action**: Review the report and mark abandoned projects with `:skull:`
+### Pull Requests / Weekly QA (`pull_request.yml`)
 
-### Broken Links Detection
-- **What**: Tests all links in README.md for availability
-- **When**: Every Saturday at 2 AM UTC + on every PR
-- **Where**: Creates/updates a GitHub issue with label `broken-links`
-- **Action**: Fix or remove broken links, or add to exclusion list
+- Runs on pull requests and weekly on Saturday.
+- Builds the Go CLI and runs `./awesome-docker validate`.
 
-### PR Validation
-- **What**: Checks for duplicate links and basic validation
-- **When**: On every pull request
-- **Action**: Automated - contributors see results immediately
+### Broken Links Report (`broken_links.yml`)
 
-## 📋 Manual Maintenance Tasks
+- Runs weekly on Saturday and on manual trigger.
+- Executes `./awesome-docker check`.
+- Opens/updates a `broken-links` issue when problems are found.
 
-### Monthly Review (First Monday of the month)
-1. Check health report issue for archived/stale projects
-2. Mark archived projects with `:skull:` in README.md
-3. Review projects with 2+ years of inactivity
-4. Remove projects that are truly abandoned/broken
+### Weekly Health Report (`health_report.yml`)
 
-### Quarterly Deep Dive (Every 3 months)
-1. Run: `npm run health-check` for detailed report
-2. Review project categories - are they still relevant?
-3. Check for popular new Docker tools to add
-4. Update documentation links if newer versions exist
+- Runs weekly on Monday and on manual trigger.
+- Executes `./awesome-docker health` then `./awesome-docker report`.
+- Opens/updates a `health-report` issue.
 
-### Annual Cleanup (January)
-1. Remove all `:skull:` projects older than 1 year
-2. Review CONTRIBUTING.md guidelines
-3. Update year references in documentation
-4. Check Node.js version requirements
+### Deploy to GitHub Pages (`deploy-pages.yml`)
 
-## 🛠️ Maintenance Commands
+- Runs on pushes to `master` and manual trigger.
+- Builds website with `./awesome-docker build` and publishes `website/`.
+
+## Day-to-Day Commands
 
-```bash
-# Test all links (requires GITHUB_TOKEN)
-npm test
-
-# Test PR changes only
-npm run test-pr
-
-# Generate health report (requires GITHUB_TOKEN)
-npm run health-check
-
-# Build the website
-npm run build
-
-# Update dependencies
-npm update
-```
+```bash
+# Build CLI
+make build
+
+# README lint/validation
+make lint
+
+# Auto-fix formatting issues
+./awesome-docker lint --fix
+
+# Link checks and health checks (requires GITHUB_TOKEN)
+make check
+make health
+make report
+```
 
-## 📊 Quality Standards
+## Content Maintenance Policy
 
-### Adding New Projects
-- Must have clear documentation (README with install/usage)
-- Should have activity within last 18 months
-- GitHub project preferred over website links
-- Must be Docker/container-related
+- Remove archived/deprecated projects instead of tagging them.
+- Remove broken links that cannot be fixed.
+- Keep sections alphabetically sorted.
+- Keep descriptions short and actionable.
 
-### Marking Projects as Abandoned
-Use `:skull:` emoji when:
-- Repository is archived on GitHub
-- No commits for 2+ years
-- Project explicitly states it's deprecated
-- Maintainer confirms abandonment
+## Suggested Review Cadence
 
-### Removing Projects
-Only remove (don't just mark `:skull:`):
-- Broken/404 links that can't be fixed
-- Duplicate entries
-- Spam or malicious projects
-- Projects that never met quality standards
+### Weekly
 
-## 🚨 Emergency Procedures
+- Triage open `broken-links` and `health-report` issues.
+- Merge straightforward quality PRs.
 
-### Critical Broken Links
-If important resources are down:
-1. Check if they moved (update URL)
-2. Search for alternatives
-3. Check Internet Archive for mirrors
-4. Temporarily comment out until resolved
+### Monthly
 
-### Spam Pull Requests
-1. Close immediately
-2. Mark as spam
-3. Block user if repeated offense
-4. Don't engage in comments
+- Review sections for stale/duplicate entries.
+- Re-run `check` and `health` manually if needed.
 
-## 📈 Metrics to Track
+### Quarterly
 
-- Total projects: ~731 GitHub repos
-- Health status: aim for <5% archived
-- Link availability: aim for >98% working
-- PR merge time: aim for <7 days
-- Weekly contributor engagement
+- Review `.github` docs and templates for drift.
+- Confirm workflows still match repository tooling and policies.
 
-## 🤝 Getting Help
+## Contributor Support
 
-- Open a discussion in GitHub Discussions
-- Check AGENTS.md for AI assistant guidelines
-- Review CONTRIBUTING.md for contributor info
+When requesting PR changes, be explicit and actionable:
+
+- point to section/order problems,
+- explain why a link should be removed,
+- suggest exact wording when description quality is the issue.
 
 ---
 
-*Last updated: 2025-10-01*
+Last updated: 2026-02-27
````
## `.github/PULL_REQUEST_TEMPLATE.md` (48 → 28 lines)

````diff
@@ -1,48 +1,28 @@
-<!-- Congrats on creating an Awesome Docker entry! 🎉 -->
+# Summary
 
-<!-- **Remember that entries are ordered alphabetically** -->
+Describe what changed and why.
 
-# TLDR
-* all entries sorted alphabetically (from A to Z),
-* If paying service add :heavy_dollar_sign:
-* If WIP add :construction:
-* clear and short description of the project
-* project MUST have: How to setup/install
-* project MUST have: How to use (examples)
-* we can help you get there :)
+## Scope
 
-## Quality Standards
+- [ ] README entries/content
+- [ ] Go CLI/tooling
+- [ ] GitHub workflows or `.github` docs
 
-Note that we can help you achieve those standards, just try your best and be brave.
-We'll guide you to the best of our abilities.
+## If This PR Adds/Edits README Entries
 
-To be on the list, it would be **nice** if entries adhere to these quality standards:
+- Category/section touched:
+- New or updated project links:
 
-- It should take less than 20 sec to find what is the project, how to install it and how to use it.
-- Generally useful to the community.
-- A project on GitHub with a well documented `README.md` file and plenty of examples is considered high quality.
-- Clearly stating if an entry is related to (Linux) containers and not to Docker. There is an [awesome list](https://github.com/Friz-zy/awesome-linux-containers) for that.
-- Clearly stating "what is it" i.e. which category it belongs to.
-- Clearly stating "what is it for" i.e. mention a real problem it solves (even a small one). Make it clear for the next person.
-- If it is a **WIP** (work in progress, not safe for production), please mention it. (Remember the time before Docker 1.0 ? ;-) )
-- Always put the link to the GitHub project instead of the website!
+## Validation
 
-To be on the list, the project **must** have:
+- [ ] `make lint`
+- [ ] `make test` (if Go code changed)
+- [ ] `./awesome-docker check` (if `GITHUB_TOKEN` available)
 
-- How to setup/install the project
-- How to use the project (examples)
+## Contributor Checklist
 
-If your PR is not merged, we will tell you why so that you may be able to improve it.
-But usually, we are pretty relaxed people, so just come and say hi, we'll figure something out together.
-
-# Rules for Pull Request
-
-- Each item should be limited to one link, no duplicates, no redirection (careful with `http` vs `https`!)
-- The link should be the name of the package or project or website
-- Description should be clear and concise (read it out loud to be sure)
-- Description should follow the link, on the same line
-- Entries are listed alphabetically, please respect the order
-- If you want to add more than one link, please don't do all PR on the exact same line, it usually results in conflicts and your PR cannot be automatically merged...
-
-Please contribute links to packages/projects you have used or are familiar with. This will help ensure high-quality entries.
-
+- [ ] Entries are alphabetically ordered in their section
+- [ ] Links point to project repositories (no duplicates or redirects)
+- [ ] Descriptions are concise and specific
+- [ ] Archived/deprecated projects were removed instead of tagged
+- [ ] Used `:yen:` only when applicable
````
## `.github/config.yml` (deleted)

````diff
@@ -1,21 +0,0 @@
-# Configuration for welcome - https://github.com/behaviorbot/welcome
-
-# Configuration for new-issue-welcome - https://github.com/behaviorbot/new-issue-welcome
-
-# Comment to be posted to on first time issues
-newIssueWelcomeComment: >
-  Thanks for opening your first issue here!
-
-# Configuration for new-pr-welcome - https://github.com/behaviorbot/new-pr-welcome
-
-# Comment to be posted to on PRs from first time contributors in your repository
-newPRWelcomeComment: >
-  Thank you for contributing. Please check out our contributing guidelines and welcome!
-
-# Configuration for first-pr-merge - https://github.com/behaviorbot/first-pr-merge
-
-# Comment to be posted to on pull requests merged by a first time user
-firstPRMergeComment: >
-  Congrats on merging your first pull request!
-
-# It is recommend to include as many gifs and emojis as possible
````
## `.github/dependabot.yml` (11 → 13 lines)

````diff
@@ -1,11 +1,13 @@
-# To get started with Dependabot version updates, you'll need to specify which
-# package ecosystems to update and where the package manifests are located.
-# Please see the documentation for all configuration options:
-# https://help.github.com/github/administering-a-repository/configuration-options-for-dependency-updates
-
 version: 2
 updates:
-  - package-ecosystem: "npm" # See documentation for possible values
-    directory: "/" # Location of package manifests
+  # Enable version updates for Go modules
+  - package-ecosystem: "gomod"
+    directory: "/"
     schedule:
-      interval: "daily"
+      interval: "weekly"
+
+  # Enable version updates for GitHub Actions
+  - package-ecosystem: "github-actions"
+    directory: "/"
+    schedule:
+      interval: "weekly"
````
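For readability, the resulting `dependabot.yml` assembled from the hunk above would read roughly as follows. Note the indentation is reconstructed, since the page capture flattened it; the key nesting follows the standard Dependabot v2 schema:

```yaml
version: 2
updates:
  # Enable version updates for Go modules
  - package-ecosystem: "gomod"
    directory: "/"
    schedule:
      interval: "weekly"

  # Enable version updates for GitHub Actions
  - package-ecosystem: "github-actions"
    directory: "/"
    schedule:
      interval: "weekly"
```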
## `.github/weekly-digest.yml` (deleted)

````diff
@@ -1,7 +0,0 @@
-# Configuration for weekly-digest - https://github.com/apps/weekly-digest
-publishDay: sun
-canPublishIssues: true
-canPublishPullRequests: true
-canPublishContributors: true
-canPublishStargazers: true
-canPublishCommits: true
````
## `.github/workflows/broken_links.yml` (73 lines changed)

````diff
@@ -2,43 +2,34 @@ name: Broken Links Report
 
 on:
   schedule:
-    # Run every Saturday at 2 AM UTC
     - cron: "0 2 * * 6"
   workflow_dispatch:
 
+concurrency:
+  group: broken-links-${{ github.ref }}
+  cancel-in-progress: false
+
 jobs:
   check-links:
     runs-on: ubuntu-latest
+    timeout-minutes: 30
     permissions:
       contents: read
       issues: write
 
     steps:
-      - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # ratchet:actions/checkout@v5.0.0
+      - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # ratchet:actions/checkout@v6
 
-      - uses: actions/setup-node@a0853c24544627f65ddf259abe73b1d18a591444 # ratchet:actions/setup-node@v5.0.0
+      - uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # ratchet:actions/setup-go@v6
         with:
-          node-version: lts/*
+          go-version-file: go.mod
 
-      - uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # ratchet:actions/cache@v4.3.0
-        with:
-          path: ~/.npm
-          key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
-          restore-keys: |
-            ${{ runner.os }}-node-
-
-      - name: Install Dependencies
-        run: npm ci --ignore-scripts --no-audit --no-progress --prefer-offline
+      - name: Build
+        run: go build -o awesome-docker ./cmd/awesome-docker
 
       - name: Run Link Check
         id: link_check
-        run: |
-          npm test > link_check_output.txt 2>&1 || true
-          if grep -q "❌ ERROR" link_check_output.txt; then
-            echo "has_errors=true" >> $GITHUB_OUTPUT
-          else
-            echo "has_errors=false" >> $GITHUB_OUTPUT
-          fi
+        run: ./awesome-docker ci broken-links --issue-file broken_links_issue.md --github-output "$GITHUB_OUTPUT"
         env:
           GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
@@ -48,34 +39,8 @@ jobs:
         with:
           script: |
             const fs = require('fs');
-            const output = fs.readFileSync('link_check_output.txt', 'utf8');
-
-            // Extract error information
-            const errorMatch = output.match(/❌ ERROR[\s\S]*$/);
-            const errorInfo = errorMatch ? errorMatch[0] : 'Link check failed - see workflow logs';
-
-            const issueBody = `# 🔗 Broken Links Detected
-
-            The weekly link check has found broken or inaccessible links in the repository.
-
-            ## Error Details
-
-            \`\`\`
-            ${errorInfo}
-            \`\`\`
-
-            ## Action Required
-
-            Please review and fix the broken links above. Options:
-            - Update the URL if the resource moved
-            - Remove the entry if it's permanently unavailable
-            - Add to \`tests/exclude_in_test.json\` if it's a known false positive
-
-            ---
-            *Auto-generated by [broken_links.yml](https://github.com/veggiemonk/awesome-docker/blob/master/.github/workflows/broken_links.yml)*
-            `;
-
-            // Check for existing issue
+            const issueBody = fs.readFileSync('broken_links_issue.md', 'utf8');
+
             const issues = await github.rest.issues.listForRepo({
               owner: context.repo.owner,
               repo: context.repo.repo,
@@ -91,16 +56,14 @@ jobs:
                 issue_number: issues.data[0].number,
                 body: issueBody
               });
-              console.log(`Updated issue #${issues.data[0].number}`);
             } else {
-              const issue = await github.rest.issues.create({
+              await github.rest.issues.create({
                 owner: context.repo.owner,
                 repo: context.repo.repo,
-                title: '🔗 Broken Links Detected - Action Required',
+                title: 'Broken Links Detected',
                 body: issueBody,
                 labels: ['broken-links', 'bug']
               });
-              console.log(`Created issue #${issue.data.number}`);
             }
 
       - name: Close Issue if No Errors
@@ -115,7 +78,6 @@ jobs:
               labels: 'broken-links',
               per_page: 1
             });
-
             if (issues.data.length > 0) {
               await github.rest.issues.update({
                 owner: context.repo.owner,
@@ -124,11 +86,4 @@ jobs:
                 state: 'closed',
                 state_reason: 'completed'
               });
-              await github.rest.issues.createComment({
-                owner: context.repo.owner,
-                repo: context.repo.repo,
-                issue_number: issues.data[0].number,
-                body: '✅ All links are now working! Closing this issue.'
-              });
-              console.log(`Closed issue #${issues.data[0].number}`);
             }
````
14 .github/workflows/deploy-pages.yml (vendored)
@@ -20,19 +20,17 @@ jobs:
 runs-on: ubuntu-latest
 steps:
 - name: Checkout
-uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # ratchet:actions/checkout@v5
+uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # ratchet:actions/checkout@v6
 
-- name: Setup Node.js
+- uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # ratchet:actions/setup-go@v6
-uses: actions/setup-node@a0853c24544627f65ddf259abe73b1d18a591444 # ratchet:actions/setup-node@v5
 with:
-node-version-file: '.nvmrc'
+go-version-file: go.mod
-cache: 'npm'
 
-- name: Install dependencies
+- name: Build CLI
-run: npm ci
+run: go build -o awesome-docker ./cmd/awesome-docker
 
 - name: Build website
-run: npm run build
+run: make website
 
 - name: Upload artifact
 uses: actions/upload-pages-artifact@7b1f4a764d45c48632c6b24a0339c27f5614fb0b # ratchet:actions/upload-pages-artifact@v4
58 .github/workflows/health_report.yml (vendored)
@@ -2,56 +2,46 @@ name: Weekly Health Report
 
 on:
 schedule:
-# Run every Monday at 9 AM UTC
 - cron: "0 9 * * 1"
-workflow_dispatch: # Allow manual trigger
+workflow_dispatch:
 
+concurrency:
+group: health-report-${{ github.ref }}
+cancel-in-progress: false
 
 jobs:
 health-check:
 runs-on: ubuntu-latest
+timeout-minutes: 30
 permissions:
-contents: write
+contents: read
 issues: write
 
 steps:
-- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # ratchet:actions/checkout@v5.0.0
+- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # ratchet:actions/checkout@v6
 
-- uses: actions/setup-node@a0853c24544627f65ddf259abe73b1d18a591444 # ratchet:actions/setup-node@v5.0.0
+- uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # ratchet:actions/setup-go@v6
 with:
-node-version: lts/*
+go-version-file: go.mod
 
-- uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # ratchet:actions/cache@v4.3.0
+- name: Build
-with:
+run: go build -o awesome-docker ./cmd/awesome-docker
-path: ~/.npm
-key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
-restore-keys: |
-${{ runner.os }}-node-
 
-- name: Install Dependencies
+- name: Run Health + Report
-run: npm ci --ignore-scripts --no-audit --no-progress --prefer-offline
+id: report
+run: ./awesome-docker ci health-report --issue-file health_report.txt --github-output "$GITHUB_OUTPUT"
-- name: Run Health Check
-run: node tests/health_check.mjs
-continue-on-error: true
 env:
 GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
 
-- name: Upload Health Report
+- name: Create/Update Issue with Health Report
-uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # ratchet:actions/upload-artifact@v4
+if: steps.report.outputs.has_report == 'true'
-with:
-name: health-report
-path: HEALTH_REPORT.md
 
-- name: Create Issue with Health Report
 uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # ratchet:actions/github-script@v8
 with:
 script: |
 const fs = require('fs');
+const report = fs.readFileSync('health_report.txt', 'utf8');
+const issueBody = report;
 
-// Read the health report
-const report = fs.readFileSync('HEALTH_REPORT.md', 'utf8');
 
-// Check if there's already an open issue
 const issues = await github.rest.issues.listForRepo({
 owner: context.repo.owner,
 repo: context.repo.repo,
@@ -60,25 +50,19 @@ jobs:
 per_page: 1
 });
 
-const issueBody = report + '\n\n---\n*This report is auto-generated weekly. See [health_check.mjs](https://github.com/veggiemonk/awesome-docker/blob/master/tests/health_check.mjs) for details.*';
 
 if (issues.data.length > 0) {
-// Update existing issue
 await github.rest.issues.update({
 owner: context.repo.owner,
 repo: context.repo.repo,
 issue_number: issues.data[0].number,
 body: issueBody
 });
-console.log(`Updated issue #${issues.data[0].number}`);
 } else {
-// Create new issue
+await github.rest.issues.create({
-const issue = await github.rest.issues.create({
 owner: context.repo.owner,
 repo: context.repo.repo,
-title: '🏥 Weekly Health Report - Repository Maintenance Needed',
+title: 'Weekly Health Report - Repository Maintenance Needed',
 body: issueBody,
 labels: ['health-report', 'maintenance']
 });
-console.log(`Created issue #${issue.data.number}`);
 }
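The create-or-update logic in the github-script step above is a common upsert idiom: search for an open issue by label, update it if found, otherwise create a new one. A minimal standalone sketch of the same branching (the function and its argument are illustrative, not part of the workflow):

```shell
# Upsert sketch: reuse the first open issue number when one exists,
# otherwise signal that a new issue should be created.
upsert_issue() {
  existing="$1"   # number of the existing open issue, empty if none
  if [ -n "$existing" ]; then
    echo "update #$existing"
  else
    echo "create new"
  fi
}

upsert_issue 42    # prints: update #42
upsert_issue ""    # prints: create new
```

The workflow gets the `existing` value from `issues.listForRepo` filtered by the `health-report` label with `per_page: 1`.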
25 .github/workflows/pull_request.yml (vendored)
@@ -11,22 +11,19 @@ jobs:
 test:
 runs-on: ubuntu-latest
 steps:
-- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # ratchet:actions/checkout@v5.0.0
+- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # ratchet:actions/checkout@v6
-- uses: actions/setup-node@a0853c24544627f65ddf259abe73b1d18a591444 # ratchet:actions/setup-node@v5.0.0
-with:
-node-version: lts/*
 
-- uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # ratchet:actions/cache@v4.3.0
+- uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # ratchet:actions/setup-go@v6
-id: cache
 with:
-path: ~/.npm
+go-version-file: go.mod
-key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
-restore-keys: |
-${{ runner.os }}-node-
 
-- name: Install Dependencies
+- name: Build
-# if: steps.cache.outputs.cache-hit != 'true'
+run: go build -o awesome-docker ./cmd/awesome-docker
-run: npm ci --ignore-scripts --no-audit --no-progress --prefer-offline
-- run: npm run test-pr
+- name: Build website
+run: ./awesome-docker build
 
+- name: Validate
+run: ./awesome-docker validate
 env:
 GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
4 .gitignore (vendored)
@@ -10,3 +10,7 @@ website/table.html
 
 .idea
 **/.DS_Store
+.worktrees
+
+# Go
+/awesome-docker
89 AGENTS.md
@@ -1,28 +1,79 @@
 # Agent Guidelines for awesome-docker
 
 ## Commands
-- Build website: `npm run build` (converts README.md to website/index.html)
+- Build CLI: `make build` (or `go build -o awesome-docker ./cmd/awesome-docker`)
-- Test all links: `npm test` (runs tests/test_all.mjs, requires GITHUB_TOKEN)
+- Rebuild from scratch: `make rebuild`
-- Test PR changes: `npm run test-pr` (runs tests/pull_request.mjs, checks duplicates)
+- Show local workflows: `make help`
-- Health check: `npm run health-check` (generates HEALTH_REPORT.md, requires GITHUB_TOKEN)
+- Format Go code: `make fmt`
+- Run tests: `make test` (runs `go test ./internal/... -v`)
+- Race tests: `make test-race`
+- Lint README rules: `make lint` (runs `./awesome-docker lint`)
+- Auto-fix lint issues: `make lint-fix`
+- Check links: `make check` (runs `./awesome-docker check`; `GITHUB_TOKEN` enables GitHub repo checks)
+- PR-safe link checks: `make check-pr`
+- PR validation: `make validate` (lint + external link checks in PR mode)
+- Build website: `make website` (generates `website/index.html` from `README.md`)
+- Health scoring: `make health` (requires `GITHUB_TOKEN`, refreshes `config/health_cache.yaml`)
+- Print health report (Markdown): `make report`
+- Print health report (JSON): `make report-json` or `./awesome-docker report --json`
+- Generate report files: `make report-file` (`HEALTH_REPORT.md`) and `make report-json-file` (`HEALTH_REPORT.json`)
+- Maintenance shortcut: `make workflow-maint` (health + JSON report file)
 
 ## Architecture
-- **Main content**: README.md - curated list of Docker resources (markdown format)
+- **Main content**: `README.md` (curated Docker/container resources)
-- **Build script**: build.js - converts README.md to HTML using showdown & cheerio
+- **CLI entrypoint**: `cmd/awesome-docker/main.go` (Cobra commands)
-- **Tests**: tests/*.mjs - link validation, duplicate detection, URL checking
+- **Core packages**:
-- **Website**: website/ - static site deployment folder
+  - `internal/parser` - parse README sections and entries
+  - `internal/linter` - alphabetical/order/format validation + autofix
+  - `internal/checker` - HTTP and GitHub link checks
+  - `internal/scorer` - repository health scoring and report generation
+  - `internal/cache` - exclude list and health cache read/write
+  - `internal/builder` - render README to website HTML from template
+- **Config**:
+  - `config/exclude.yaml` - known link-check exclusions
+  - `config/website.tmpl.html` - HTML template for site generation
+  - `config/health_cache.yaml` - persisted health scoring cache
+- **Generated outputs**:
+  - `awesome-docker` - compiled CLI binary
+  - `website/index.html` - generated website
+  - `HEALTH_REPORT.md` - generated markdown report
+  - `HEALTH_REPORT.json` - generated JSON report
 
 ## Code Style
-- **Language**: Node.js with ES modules (.mjs) for tests, CommonJS for build.js
+- **Language**: Go
-- **Imports**: Use ES6 imports in .mjs files, require() in .js files
+- **Formatting**: Keep code `gofmt`-clean
-- **Error handling**: Use try/catch with LOG.error() and process.exit(1) for failures
+- **Testing**: Add/adjust table-driven tests in `internal/*_test.go` for behavior changes
-- **Logging**: Use LOG object with error/debug methods (see build.js for pattern)
+- **Error handling**: Return wrapped errors (`fmt.Errorf("context: %w", err)`) from command handlers
-- **Async**: Prefer async/await over callbacks
+- **CLI conventions**: Keep command behavior consistent with existing Cobra commands (`lint`, `check`, `health`, `build`, `report`, `validate`)
 
+## CI/Automation
+- **PR + weekly validation**: `.github/workflows/pull_request.yml`
+  - Triggers on pull requests to `master` and weekly schedule
+  - Builds Go CLI and runs `./awesome-docker validate`
+- **Weekly broken links issue**: `.github/workflows/broken_links.yml`
+  - Runs `./awesome-docker check`
+  - Opens/updates `broken-links` issue when failures are found
+- **Weekly health report issue**: `.github/workflows/health_report.yml`
+  - Runs `./awesome-docker health` then `./awesome-docker report`
+  - Opens/updates `health-report` issue
+- **GitHub Pages deploy**: `.github/workflows/deploy-pages.yml`
+  - On push to `master`, builds CLI, runs `./awesome-docker build`, deploys `website/`
+
+## Makefile Workflow
+- The `Makefile` models file dependencies for generated artifacts (`awesome-docker`, `website/index.html`, `config/health_cache.yaml`, `HEALTH_REPORT.md`, `HEALTH_REPORT.json`).
+- Prefer `make` targets over ad-hoc command sequences so dependency and regeneration behavior stays consistent.
+- Use:
+  - `make workflow-dev` for local iteration
+  - `make workflow-pr` before opening/updating a PR
+  - `make workflow-maint` for health/report maintenance
+  - `make workflow-ci` for CI-equivalent local checks
 
 ## Content Guidelines (from CONTRIBUTING.md)
-- Link to GitHub projects, not websites
+- Use one link per entry
-- Entries are listed alphabetically (from A to Z)
+- Prefer project/repository URLs over marketing pages
-- Entries must be Docker/container-related with clear documentation
+- Keep entries alphabetically ordered within each section
-- Include project description, installation, and usage examples
+- Keep descriptions concise and concrete
-- Mark WIP projects explicitly
+- Use `:yen:` only for paid/commercial services
-- Avoid outdated tutorials/blog posts unless advanced/specific
+- Use `:ice_cube:` for stale projects (2+ years inactive)
+- Remove archived/deprecated projects instead of tagging them
+- Avoid duplicate links and redirect variants
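The alphabetical-ordering rule in the Content Guidelines above can be approximated with `sort -c`, which checks order without sorting. A hypothetical illustration of the check, not the linter's actual implementation:

```shell
# Hypothetical check mirroring the linter's ordering rule: entries within a
# section must be in case-insensitive alphabetical order. sort -c only
# verifies order; it prints nothing and exits 0 when the input is sorted.
if printf '%s\n' Capitan Composerize Dockerlings | sort -cf; then
  echo "ordered"
else
  echo "out of order"
fi
```

The real `./awesome-docker lint` parses README entries per section and can also rewrite them into order via `lint --fix`.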
145 Makefile (new file)
@@ -0,0 +1,145 @@
+SHELL := /bin/bash
+
+BINARY ?= awesome-docker
+GO ?= go
+CMD_PACKAGE := ./cmd/awesome-docker
+INTERNAL_PACKAGES := ./internal/...
+WEBSITE_OUTPUT := website/index.html
+HEALTH_CACHE := config/health_cache.yaml
+HEALTH_REPORT_MD := HEALTH_REPORT.md
+HEALTH_REPORT_JSON := HEALTH_REPORT.json
+
+GO_SOURCES := $(shell find cmd internal -type f -name '*.go')
+BUILD_INPUTS := $(GO_SOURCES) go.mod go.sum
+WEBSITE_INPUTS := README.md config/website.tmpl.html
+HEALTH_INPUTS := README.md config/exclude.yaml
+
+.DEFAULT_GOAL := help
+
+.PHONY: help \
+	build rebuild clean \
+	fmt test test-race \
+	lint lint-fix check check-pr validate website \
+	guard-github-token health health-cache \
+	report report-json report-file report-json-file health-report \
+	workflow-dev workflow-pr workflow-maint workflow-ci
+
+help: ## Show the full local workflow and available targets
+	@echo "awesome-docker Makefile"
+	@echo
+	@echo "Workflows:"
+	@echo "  make workflow-dev    # local iteration (fmt + test + lint + check-pr + website)"
+	@echo "  make workflow-pr     # recommended before opening/updating a PR"
+	@echo "  make workflow-maint  # repository maintenance (health + JSON report)"
+	@echo "  make workflow-ci     # CI-equivalent checks"
+	@echo
+	@echo "Core targets:"
+	@echo "  make build           # build CLI binary"
+	@echo "  make test            # run internal Go tests"
+	@echo "  make lint            # validate README formatting/content rules"
+	@echo "  make check           # check links (uses GITHUB_TOKEN when set)"
+	@echo "  make validate        # run PR validation (lint + check --pr)"
+	@echo "  make website         # generate website/index.html"
+	@echo "  make report-file     # generate HEALTH_REPORT.md"
+	@echo "  make report-json-file # generate HEALTH_REPORT.json"
+	@echo "  make health          # refresh health cache (requires GITHUB_TOKEN)"
+	@echo "  make report          # print markdown health report"
+	@echo "  make report-json     # print full JSON health report"
+	@echo
+	@echo "Generated artifacts:"
+	@echo "  $(BINARY)"
+	@echo "  $(WEBSITE_OUTPUT)"
+	@echo "  $(HEALTH_CACHE)"
+	@echo "  $(HEALTH_REPORT_MD)"
+	@echo "  $(HEALTH_REPORT_JSON)"
+
+$(BINARY): $(BUILD_INPUTS)
+	$(GO) build -o $(BINARY) $(CMD_PACKAGE)
+
+build: $(BINARY) ## Build CLI binary
+
+rebuild: clean build ## Rebuild from scratch
+
+clean: ## Remove generated binary
+	rm -f $(BINARY) $(HEALTH_REPORT_MD) $(HEALTH_REPORT_JSON)
+
+fmt: ## Format Go code
+	$(GO) fmt ./...
+
+test: ## Run internal unit tests
+	$(GO) test $(INTERNAL_PACKAGES) -v
+
+test-race: ## Run internal tests with race detector
+	$(GO) test $(INTERNAL_PACKAGES) -race
+
+lint: build ## Validate README formatting/content rules
+	./$(BINARY) lint
+
+lint-fix: build ## Auto-fix lint issues when possible
+	./$(BINARY) lint --fix
+
+check: build ## Check links (GitHub checks enabled when GITHUB_TOKEN is set)
+	./$(BINARY) check
+
+check-pr: build ## Check links in PR mode (external links only)
+	./$(BINARY) check --pr
+
+validate: build ## Run PR validation (lint + check --pr)
+	./$(BINARY) validate
+
+$(WEBSITE_OUTPUT): $(BINARY) $(WEBSITE_INPUTS)
+	./$(BINARY) build
+
+website: $(WEBSITE_OUTPUT) ## Generate website from README
+
+guard-github-token:
+	@if [ -z "$$GITHUB_TOKEN" ]; then \
+		echo "GITHUB_TOKEN is required for this target."; \
+		echo "Set it with: export GITHUB_TOKEN=<token>"; \
+		exit 1; \
+	fi
+
+$(HEALTH_CACHE): guard-github-token $(BINARY) $(HEALTH_INPUTS)
+	./$(BINARY) health
+
+health-cache: $(HEALTH_CACHE) ## Update config/health_cache.yaml
+
+health: ## Refresh health cache from GitHub metadata
+	@$(MAKE) --no-print-directory -B health-cache
+
+report: build ## Print markdown health report from cache
+	./$(BINARY) report
+
+report-json: build ## Print full health report as JSON
+	./$(BINARY) report --json
+
+$(HEALTH_REPORT_MD): $(BINARY) $(HEALTH_CACHE)
+	./$(BINARY) report > $(HEALTH_REPORT_MD)
+
+report-file: $(HEALTH_REPORT_MD) ## Generate HEALTH_REPORT.md from cache
+
+$(HEALTH_REPORT_JSON): $(BINARY) $(HEALTH_CACHE)
+	./$(BINARY) report --json > $(HEALTH_REPORT_JSON)
+
+report-json-file: $(HEALTH_REPORT_JSON) ## Generate HEALTH_REPORT.json from cache
+
+health-report: health report-file ## Refresh health cache then generate HEALTH_REPORT.md
+
+browse: build ## Launch interactive TUI browser
+	./$(BINARY) browse
+
+workflow-dev: fmt test lint check-pr website ## Full local development workflow
+
+workflow-pr: fmt test validate ## Recommended workflow before opening a PR
+
+workflow-maint: health report-json-file ## Weekly maintenance workflow
+
+workflow-ci: test validate ## CI-equivalent validation workflow
+
+update-ga:
+	ratchet upgrade .github/workflows/*
+
+update-go:
+	go get -u go@latest
+	go get -u ./...
+	go mod tidy
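The `guard-github-token` recipe in the Makefile above fails fast when `GITHUB_TOKEN` is unset. The same logic as a standalone shell function (the function name is illustrative; the Makefile runs the `if` directly in a recipe):

```shell
# Standalone version of the Makefile's GITHUB_TOKEN guard: fail with a hint
# when the token is missing, succeed silently when it is set.
guard_github_token() {
  if [ -z "$GITHUB_TOKEN" ]; then
    echo "GITHUB_TOKEN is required for this target." >&2
    echo "Set it with: export GITHUB_TOKEN=<token>" >&2
    return 1
  fi
}

unset GITHUB_TOKEN
guard_github_token 2>/dev/null || echo "guard tripped"
GITHUB_TOKEN=dummy guard_github_token && echo "guard passed"
```

In the Makefile the guard is an order-only style prerequisite of `$(HEALTH_CACHE)`, so `make health` aborts before calling the GitHub API without credentials.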
712
README.md
712
README.md
@@ -1,4 +1,4 @@
|
|||||||
# Awesome Docker [][sindresorhus] [](https://app.netlify.com/sites/awesome-docker/deploys)[](https://www.trackawesomelist.com/veggiemonk/awesome-docker/)[](https://github.com/veggiemonk/awesome-docker/commits/main)<!-- omit in toc -->
|
# Awesome Docker [][sindresorhus] [](https://www.trackawesomelist.com/veggiemonk/awesome-docker/)[](https://github.com/veggiemonk/awesome-docker/commits/main)<!-- omit in toc -->
|
||||||
|
|
||||||
> A curated list of Docker resources and projects
|
> A curated list of Docker resources and projects
|
||||||
|
|
||||||
@@ -13,8 +13,6 @@ If this list is not complete, you can [contribute][editreadme] to make it so. He
|
|||||||
|
|
||||||
The creators and maintainers of this list do not receive any form of payment to accept a change made by any contributor. This page is not an official Docker product in any way. It is a list of links to projects and is maintained by volunteers. Everybody is welcome to contribute. The goal of this repo is to index open-source projects, not to advertise for profit.
|
The creators and maintainers of this list do not receive any form of payment to accept a change made by any contributor. This page is not an official Docker product in any way. It is a list of links to projects and is maintained by volunteers. Everybody is welcome to contribute. The goal of this repo is to index open-source projects, not to advertise for profit.
|
||||||
|
|
||||||
All the links are monitored and tested with a home baked [Node.js script](https://github.com/veggiemonk/awesome-docker/blob/master/.github/workflows/pull_request.yml)
|
|
||||||
|
|
||||||
# Contents <!-- omit in toc -->
|
# Contents <!-- omit in toc -->
|
||||||
|
|
||||||
<!-- TOC -->
|
<!-- TOC -->
|
||||||
@@ -59,7 +57,7 @@ All the links are monitored and tested with a home baked [Node.js script](https:
|
|||||||
- [Serverless](#serverless)
|
- [Serverless](#serverless)
|
||||||
- [Testing](#testing)
|
- [Testing](#testing)
|
||||||
- [Wrappers](#wrappers)
|
- [Wrappers](#wrappers)
|
||||||
- [Services based on Docker (mostly :heavy\_dollar\_sign:)](#services-based-on-docker-mostly-heavy_dollar_sign)
|
- [Services based on Docker (mostly :yen:)](#services-based-on-docker-mostly-yen)
|
||||||
- [CI Services](#ci-services)
|
- [CI Services](#ci-services)
|
||||||
- [CaaS](#caas)
|
- [CaaS](#caas)
|
||||||
- [Monitoring Services](#monitoring-services)
|
- [Monitoring Services](#monitoring-services)
|
||||||
@@ -81,8 +79,8 @@ All the links are monitored and tested with a home baked [Node.js script](https:
|
|||||||
|
|
||||||
# Legend
|
# Legend
|
||||||
|
|
||||||
- Beta :construction:
|
- Monetized :yen:
|
||||||
- Monetized :heavy_dollar_sign:
|
- Stale (2+ years inactive) :ice_cube:
|
||||||
|
|
||||||
# What is Docker
|
# What is Docker
|
||||||
|
|
||||||
@@ -93,21 +91,25 @@ _Source:_ [What is Docker](https://www.docker.com/why-docker/)
|
|||||||
# Where to start
|
# Where to start
|
||||||
|
|
||||||
- [Benefits of using Docker](https://semaphore.io/blog/docker-benefits) for development and delivery, with a practical roadmap for adoption.
|
- [Benefits of using Docker](https://semaphore.io/blog/docker-benefits) for development and delivery, with a practical roadmap for adoption.
|
||||||
|
- [Bootstrapping Microservices](https://www.manning.com/books/bootstrapping-microservices-with-docker-kubernetes-and-terraform) - A practical and project-based guide to building applications with microservices, starts by building a Docker image for a single microservice and publishing it to a private container registry, finishes by deploying a complete microservices application to a production Kubernetes cluster.
|
||||||
- [Docker Curriculum](https://github.com/prakhar1989/docker-curriculum): A comprehensive tutorial for getting started with Docker. Teaches how to use Docker and deploy dockerized apps on AWS with Elastic Beanstalk and Elastic Container Service.
|
- [Docker Curriculum](https://github.com/prakhar1989/docker-curriculum): A comprehensive tutorial for getting started with Docker. Teaches how to use Docker and deploy dockerized apps on AWS with Elastic Beanstalk and Elastic Container Service.
|
||||||
- [Docker Documentation](https://docs.docker.com/): the official documentation.
|
- [Docker Documentation](https://docs.docker.com/): the official documentation.
|
||||||
- [Docker for beginners](https://github.com/groda/big_data/blob/master/docker_for_beginners.md): A tutorial for beginners who need to learn the basics of Docker—from "Hello world!" to basic interactions with containers, with simple explanations of the underlying concepts.
|
- [Docker for beginners](https://github.com/groda/big_data/blob/master/docker_for_beginners.md): A tutorial for beginners who need to learn the basics of Docker—from "Hello world!" to basic interactions with containers, with simple explanations of the underlying concepts.
|
||||||
- [Docker for novices](https://www.youtube.com/watch?v=xsjSadjKXns) An introduction to Docker for developers and testers who have never used it. (Video 1h40, recorded linux.conf.au 2019 — Christchurch, New Zealand) by Alex Clews.
|
- [Docker for novices](https://www.youtube.com/watch?v=xsjSadjKXns) An introduction to Docker for developers and testers who have never used it. (Video 1h40, recorded linux.conf.au 2019 — Christchurch, New Zealand) by Alex Clews.
|
||||||
|
|
||||||
- [Docker katas](https://github.com/eficode-academy/docker-katas) A series of labs that will take you from "Hello Docker" to deploying a containerized web application to a server.
|
- [Docker katas](https://github.com/eficode-academy/docker-katas) A series of labs that will take you from "Hello Docker" to deploying a containerized web application to a server.
|
||||||
- [Docker simplified in 55 seconds](https://www.youtube.com/watch?v=vP_4DlOH1G4): An animated high-level introduction to Docker. Think of it as a visual tl;dr that makes it easier to dive into more complex learning materials.
|
- [Docker simplified in 55 seconds](https://www.youtube.com/watch?v=vP_4DlOH1G4): An animated high-level introduction to Docker. Think of it as a visual tl;dr that makes it easier to dive into more complex learning materials.
|
||||||
- [Docker Training](https://training.mirantis.com) :heavy_dollar_sign:
|
- [Docker Training](https://training.mirantis.com) :yen:
|
||||||
|
- [Dockerlings](https://github.com/furkan/dockerlings): Learn docker from inside your terminal, with a modern TUI and bite sized exercises (by [furkan](https://github.com/furkan))
|
||||||
|
|
||||||
- [Introduction à Docker](https://blog.stephane-robert.info/docs/conteneurs/moteurs-conteneurs/docker/) A dedicated section to master Docker on a French site about DevSecOps: From the basics to best practices, including optimizing, securing your containers...
|
- [Introduction à Docker](https://blog.stephane-robert.info/docs/conteneurs/moteurs-conteneurs/docker/) A dedicated section to master Docker on a French site about DevSecOps: From the basics to best practices, including optimizing, securing your containers...
|
||||||
- [Learn Docker](https://github.com/dwyl/learn-docker): step-by-step tutorial and more resources (video, articles, cheat sheets) by [dwyl](https://github.com/dwyl)
|
- [Learn Docker](https://github.com/dwyl/learn-docker): step-by-step tutorial and more resources (video, articles, cheat sheets) by [dwyl](https://github.com/dwyl)
|
||||||
- [Learn Docker (Visually)](https://pagertree.com/learn/docker/overview) - A beginner-focused high-level overview of all the major components of Docker and how they fit together. Lots of high-quality images, examples, and resources.
|
- [Learn Docker (Visually)](https://pagertree.com/learn/docker/overview) - A beginner-focused high-level overview of all the major components of Docker and how they fit together. Lots of high-quality images, examples, and resources.
|
||||||
|
- [Play With Docker](https://training.play-with-docker.com/): PWD is a great way to get started with Docker from beginner to advanced users. Docker runs directly in your browser.
|
||||||
- [Practical Guide about Docker Commands in Spanish](https://github.com/brunocascio/docker-espanol) This Spanish guide contains the use of basic docker commands with real life examples.
|
- [Practical Guide about Docker Commands in Spanish](https://github.com/brunocascio/docker-espanol) This Spanish guide contains the use of basic docker commands with real life examples.
|
||||||
- [Setting Python Development Environment with VScode and Docker](https://github.com/RamiKrispin/vscode-python): A step-by-step tutorial for setting up a dockerized Python development environment with VScode, Docker, and the Dev Container extension.
|
- [Setting Python Development Environment with VScode and Docker](https://github.com/RamiKrispin/vscode-python): A step-by-step tutorial for setting up a dockerized Python development environment with VScode, Docker, and the Dev Container extension.
|
||||||
- [The Docker Handbook](https://docker-handbook.farhan.dev/) An open-source book that teaches you the fundamentals, best practices and some intermediate Docker functionalities. The book is hosted on [fhsinchy/the-docker-handbook](https://github.com/fhsinchy/the-docker-handbook) and the projects are hosted on [fhsinchy/docker-handbook-projects](https://github.com/fhsinchy/docker-handbook-projects) repository.
|
- [The Docker Handbook](https://docker-handbook.farhan.dev/) An open-source book that teaches you the fundamentals, best practices and some intermediate Docker functionalities. The book is hosted on [fhsinchy/the-docker-handbook](https://github.com/fhsinchy/the-docker-handbook) and the projects are hosted on [fhsinchy/docker-handbook-projects](https://github.com/fhsinchy/docker-handbook-projects) repository.
|
||||||
|
|
||||||
|
|
||||||
**Cheatsheets** by
|
**Cheatsheets** by
|
||||||
|
|
||||||
- [eon01](https://github.com/eon01/DockerCheatSheet)
|
- [eon01](https://github.com/eon01/DockerCheatSheet)
|
||||||
@@ -122,7 +124,7 @@ _Source:_ [What is Docker](https://www.docker.com/why-docker/)
|
|||||||
- [Docker with Microsoft SQL 2016 + ASP.NET](https://blog.alexellis.io/docker-does-sql2016-aspnet/) Demonstration running ASP.NET and SQL Server workloads in Docker
- [Exploring ASP.NET Core with Docker in both Linux and Windows Containers](https://www.hanselman.com/blog/exploring-aspnet-core-with-docker-in-both-linux-and-windows-containers) Running ASP.NET Core apps in Linux and Windows containers, using [Docker for Windows][docker-for-windows]
- [Running a Legacy ASP.NET App in a Windows Container](https://blog.sixeyed.com/dockerizing-nerd-dinner-part-1-running-a-legacy-asp-net-app-in-a-windows-container/) Steps for Dockerizing a legacy ASP.NET app and running as a Windows container
- [Windows Containers and Docker: The 101](https://www.youtube.com/watch?v=N7SG2wEyQtM) - A 20-minute overview, using Docker to run PowerShell, ASP.NET Core and ASP.NET apps.
- [Windows Containers Quick Start](https://learn.microsoft.com/en-us/virtualization/windowscontainers/about/) Overview of Windows containers, drilling down to Quick Starts for Windows 10 and Windows Server 2016

---

### Container Composition

- [Capitan](https://github.com/byrnedo/capitan) :ice_cube: - Composable docker orchestration with added scripting support by [byrnedo].
- [Composerize](https://github.com/magicmark/composerize) - Convert docker run commands into docker-compose files.
- [crowdr](https://github.com/polonskiy/crowdr) :ice_cube: - Tool for managing multiple Docker containers (`docker-compose` alternative).
- [ctk](https://github.com/ctk-hq/ctk) - Visual composer for container based workloads. By [corpulent](https://github.com/corpulent).
- [docker-config-update](https://github.com/sudo-bmitch/docker-config-update) :ice_cube: - Utility to update docker configs and secrets for deploying in a compose file.
- [elsy](https://github.com/cisco/elsy) :ice_cube: - An opinionated, multi-language, build tool based on Docker and Docker Compose.
- [habitus](https://github.com/cloud66-oss/habitus) :ice_cube: - A Build Flow Tool for Docker.
- [kompose](https://github.com/kubernetes/kompose) - Go from Docker Compose to Kubernetes.
- [LLM Harbor](https://github.com/av/harbor) - A CLI and companion app to effortlessly run LLM backends, APIs, frontends, and services with one command. By [av](https://github.com/av).
- [plash](https://github.com/ihucos/plash) - A container run and build engine - runs inside docker.
- [podman-compose](https://github.com/containers/podman-compose) - A script to run docker-compose.yml using podman.
- [Smalte](https://github.com/roquie/smalte) - Dynamically configure applications that require static configuration in a Docker container. By [roquie](https://github.com/roquie).
- [Stitchocker](https://github.com/alexaandrov/stitchocker) - A lightweight and fast command line utility for conveniently grouping your docker-compose multiple container services. By [alexaandrov](https://github.com/alexaandrov).

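The composition tools above largely automate one mapping: `docker run` flags onto Compose file keys. A minimal sketch of that correspondence (the image, port, service name, and environment variable below are arbitrary examples, not taken from any of the listed projects):

```shell
# docker run -d --name web -p 8080:80 -e APP_ENV=prod nginx:alpine
# corresponds roughly to this compose file:
cat > docker-compose.yml <<'EOF'
services:
  web:                     # --name web
    image: nginx:alpine    # positional image argument
    ports:
      - "8080:80"          # -p 8080:80 (host:container)
    environment:
      - APP_ENV=prod       # -e APP_ENV=prod
EOF
cat docker-compose.yml
```

Conversion tools such as Composerize produce a starting point like this; long-lived services usually still need hand-tuning (restart policies, networks, volumes).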
### Deployment and Infrastructure

- [awesome-stacks](https://github.com/ethibox/awesome-stacks) - Deploy 150+ open-source web apps with one Docker command.
- [blackfish](https://gitlab.com/blackfish/blackfish) - A CoreOS VM to build swarm clusters for Dev & Production.
- [BosnD](https://gitlab.com/n0r1sk/bosnd) - BosnD, the boatswain daemon - A dynamic configuration file writer & service reloader for dynamically changing container environments.
- [Clocker](https://github.com/brooklyncentral/clocker) :ice_cube: - Clocker creates and manages a Docker cloud infrastructure. Clocker supports single-click deployments and runtime management of multi-node applications that run as containers distributed across multiple hosts, on both Docker and Marathon. It leverages [Calico][calico] and [Weave][weave] for networking and [Brooklyn](https://brooklyn.apache.org/) for application blueprints. By [brooklyncentral](https://github.com/brooklyncentral).
- [Conduit](https://github.com/ehazlett/conduit) :ice_cube: - Experimental deployment system for Docker.
- [depcon](https://github.com/ContainX/depcon) :ice_cube: - Depcon is written in Go and allows you to easily deploy Docker containers to Apache Mesos/Marathon, Amazon ECS and Kubernetes. By [ContainX][containx].
- [docker-to-iac](https://github.com/deploystackio/docker-to-iac) - Translate docker run and commit into Infrastructure as Code templates for AWS, Render.com and DigitalOcean.
- [gitkube](https://github.com/hasura/gitkube) :ice_cube: - Gitkube is a tool for building and deploying docker images on Kubernetes using `git push`. By [Hasura](https://github.com/hasura/).
- [Grafeas](https://github.com/grafeas/grafeas) - A common API for metadata about containers, from image and build details to security vulnerabilities. By [grafeas](https://github.com/grafeas).
- [swarm-ansible](https://github.com/LombardiDaniel/swarm-ansible) - Swarm-Ansible bootstraps a production-ready swarm cluster using Ansible. Comes with tools to automate CI, monitoring helpers, Traefik pre-configured for SSL certificates and simple-auth, a private registry, and more!
- [SwarmManagement](https://github.com/hansehe/SwarmManagement) - Swarm Management is a Python application, installed with pip. The application makes it easy to manage a Docker Swarm by configuring a single yaml file describing which stacks to deploy, and which networks, configs or secrets to create.
- [werf](https://github.com/werf/werf) - Werf is a CI/CD tool for building Docker images efficiently and deploying them to Kubernetes using GitOps.

### Monitoring

- [ADRG](https://github.com/jaldertech/adrg) - Dynamic Docker resource governor using cgroups v2 to manage system load.
- [Autoheal](https://github.com/willfarrell/docker-autoheal) - Monitor and restart unhealthy docker containers automatically.
- [cAdvisor](https://github.com/google/cadvisor) - Analyzes resource usage and performance characteristics of running containers.
- [Checkmate](https://github.com/bluewave-labs/checkmate) - Checkmate is an open-source, self-hosted tool designed to track and monitor server hardware, uptime, response times, and incidents in real-time with beautiful visualizations.
- [DLIA](https://github.com/zorak1103/dlia) - DLIA is an AI-powered Docker log monitoring agent that uses Large Language Models (LLMs) to intelligently analyze container logs, detect anomalies, and provide contextual insights over time. By [zorak1103](https://github.com/zorak1103).
- [Docker-Alertd](https://github.com/deltaskelta/docker-alertd) :ice_cube: - Monitor and send alerts based on docker container resource usage/statistics.
- [Docker-Flow-Monitor](https://github.com/docker-flow/docker-flow-monitor) :ice_cube: - Automatically reconfigures Prometheus when a new service is updated or deployed.
- [DockProbe](https://github.com/deep-on/dockprobe) - Lightweight Docker monitoring dashboard in a single container. Real-time metrics, 6 anomaly detection rules, Telegram alerts, and 16 automated security scans. Zero config, ~50MB RAM. By [DeepOn](https://github.com/deep-on).
- [DockProc](https://gitlab.com/n0r1sk/dockproc) - I/O monitoring for containers on process level.
- [dockprom](https://github.com/stefanprodan/dockprom) - Docker hosts and containers monitoring with Prometheus, Grafana, cAdvisor, NodeExporter and AlertManager.
- [Doku](https://github.com/amerkurev/doku) - Doku is a simple web-based application that allows you to monitor Docker disk usage. By [amerkurev](https://github.com/amerkurev).
- [Dozzle](https://github.com/amir20/dozzle) - Monitor container logs in real-time with a browser or mobile device. By [amir20](https://github.com/amir20).
- [Drydock](https://github.com/CodesWhat/drydock) - Container update monitoring with web dashboard, 23 registry providers, 20 notification triggers, and distributed agent architecture. By [CodesWhat](https://github.com/CodesWhat).
- [Dynatrace](https://docs.dynatrace.com/docs/observe/infrastructure-observability/container-platform-monitoring) - :yen: Monitor containerized applications without installing agents or modifying your Run commands.
- [Glances](https://github.com/nicolargo/glances) - A cross-platform curses-based system monitoring tool written in Python.
- [Grafana Docker Dashboard Template](https://grafana.com/grafana/dashboards/179-docker-prometheus-monitoring/) - A template for your Docker, Grafana and Prometheus stack. By [vegasbrianc][vegasbrianc].
- [HertzBeat](https://github.com/dromara/hertzbeat) - An open-source, agentless real-time monitoring system with custom monitoring support.
- [InfluxDB, cAdvisor, Grafana](https://github.com/vegasbrianc/docker-monitoring) :ice_cube: - InfluxDB Time series DB in combination with Grafana and cAdvisor.
- [Logspout](https://github.com/gliderlabs/logspout) :ice_cube: - Log routing for Docker container logs.
- [Maintenant](https://github.com/kolapsis/maintenant) - Self-discovering infrastructure monitoring for Docker and Kubernetes. Auto-detects containers via labels, with endpoint monitoring, heartbeats, TLS certificates, resource metrics, update intelligence, and a built-in status page. Single binary with embedded SPA. By [kolapsis](https://github.com/kolapsis).
- [monit-docker](https://github.com/decryptus/monit-docker) :ice_cube: - Monitor docker containers' resource usage or status, and execute docker commands or commands inside containers. By [decryptus][decryptus].
- [NexClipper](https://github.com/NexClipper/NexClipper) :ice_cube: - NexClipper is the container monitoring and performance management solution specialized in Docker, Apache Mesos, Marathon, DC/OS, Mesosphere, Kubernetes.
- [Out-of-the-box Host/Container Monitoring/Logging/Alerting Stack](https://github.com/uschtwill/docker_monitoring_logging_alerting) :ice_cube: - Docker host and container monitoring, logging and alerting out of the box using cAdvisor, Prometheus and Grafana for monitoring, Elasticsearch, Kibana and Logstash for logging, and elastalert and Alertmanager for alerting. Set up in 5 minutes. Secure mode for production use with built-in [Automated Nginx Reverse Proxy (jwilder's)][nginxproxy].
- [Sidekick](https://github.com/runsidekick/sidekick) :ice_cube: - Open source live application debugger like Chrome DevTools for your backend. Collect traces and generate logs on-demand without stopping & redeploying your applications.
- [SwarmAlert](https://github.com/gpulido/SwarmAlert) :ice_cube: - Monitors a Docker Swarm and sends Pushover alerts when it finds a container with no healthy service task running.
- [Zabbix Docker](https://github.com/gomex/docker-zabbix) :ice_cube: - Monitor containers automatically using the Zabbix LLD feature.
- [Zabbix Docker module](https://github.com/monitoringartist/Zabbix-Docker-Monitoring) :ice_cube: - Zabbix module that provides discovery of running containers and CPU/memory/blk IO/net container metrics. Systemd Docker and the LXC execution driver are also supported. It's a dynamically linked shared object library, so its performance is ~10x better than any script solution.

### Networking

- [Calico][calico] - Calico is a pure layer 3 virtual network that allows containers over multiple docker-hosts to talk to each other.
- [Flannel](https://github.com/flannel-io/flannel/) - Flannel is a virtual network that gives a subnet to each host for use with container runtimes. By [flannel-io](https://github.com/flannel-io).
- [Freeflow](https://github.com/Microsoft/Freeflow) :ice_cube: - High performance container overlay networks on Linux, enabling RDMA (on both InfiniBand and RoCE) and accelerating TCP to bare metal performance. By [Microsoft](https://github.com/Microsoft).
- [MyIP](https://github.com/jason5ng32/MyIP) - All in one IP Toolbox. Easy to check all your IPs, IP geolocation, check for DNS leaks, examine WebRTC connections, speed test, ping test, MTR test, check website availability, whois search and more. By [jason5ng32](https://github.com/jason5ng32).
- [netshoot](https://github.com/nicolaka/netshoot) - The netshoot container has a powerful set of networking tools to help troubleshoot Docker networking issues.
- [Pipework](https://github.com/jpetazzo/pipework) - Software-Defined Networking for Linux Containers. Pipework works with "plain" LXC containers and with Docker. By [jpetazzo][jpetazzo].

### Orchestration

- [Ansible Linux Docker](https://github.com/Peco602/ansible-linux-docker) :ice_cube: - Run Ansible from a Linux container. By [Peco602][peco602].
- [athena](https://github.com/athena-oss/athena) :ice_cube: - An automation platform with a plugin architecture that allows you to easily create and share services.
- [CloudSlang](https://github.com/CloudSlang/cloud-slang) - CloudSlang is a workflow engine to create Docker process automation.
- [clusterdock](https://github.com/clusterdock/clusterdock) :ice_cube: - Docker container orchestration to enable the testing of long-running cluster deployments.
- [Crane](https://github.com/Dataman-Cloud/crane) :ice_cube: - Control plane based on docker built-in swarm. By [Dataman-Cloud](https://github.com/Dataman-Cloud).
- [Docker Flow Swarm Listener](https://github.com/docker-flow/docker-flow-swarm-listener) :ice_cube: - Listens to Docker Swarm events and sends requests when a change occurs. By [docker-flow][docker-flow].
- [docker rollout](https://github.com/Wowu/docker-rollout) - Zero downtime deployment for Docker Compose services.
- [Haven](https://github.com/codeabovelab/haven-platform) :ice_cube: - Haven is a simplified container management platform that integrates container, application, cluster, image, and registry management. By [codeabovelab](https://github.com/codeabovelab).
- [Kubernetes](https://github.com/kubernetes/kubernetes) - Open source orchestration system for Docker containers by Google.
- [ManageIQ](https://github.com/ManageIQ/manageiq) - Discover, optimize and control your hybrid IT. By [ManageIQ](https://github.com/ManageIQ).
- [Mesos](https://github.com/apache/mesos) - Resource/job scheduler for containers, VMs and physical hosts. By [Apache](https://mesos.apache.org/).
- [Nebula](https://github.com/nebula-orchestrator) - A Docker orchestration tool designed to manage massive scale distributed clusters.
- [Nomad](https://github.com/hashicorp/nomad) - Easily deploy applications at any scale. A distributed, highly available, datacenter-aware scheduler.
- [Rancher](https://github.com/rancher/rancher) - An open source project that provides a complete platform for operating Docker in production.
- [RedHerd Framework](https://github.com/redherd-project/redherd-framework) :ice_cube: - RedHerd is a collaborative and serverless framework for orchestrating a geographically distributed group of assets capable of simulating complex offensive cyberspace operations. By [RedHerdProject](https://github.com/redherd-project).
- [Swarm-cronjob](https://github.com/crazy-max/swarm-cronjob) - Create jobs on a time-based schedule on Swarm. By [crazy-max].

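Most Swarm-oriented tools above operate on stack files: ordinary Compose files plus a `deploy` section that `docker stack deploy` reads. A minimal sketch, where the service name and image are placeholders:

```yaml
# stack.yml - deploy with: docker stack deploy -c stack.yml mystack
services:
  api:
    image: myorg/api:1.0            # placeholder image
    deploy:
      replicas: 3                   # Swarm schedules three tasks
      update_config:
        parallelism: 1              # roll tasks one at a time
        order: start-first          # start a new task before stopping the old one
      restart_policy:
        condition: on-failure
```

Listeners and schedulers such as Docker Flow Swarm Listener or Swarm-cronjob then react to events on, or attach labels to, services declared this way.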
### PaaS

- [caprover](https://github.com/caprover/caprover) - (Previously known as CaptainDuckDuck) Automated Scalable Webserver Package (automated Docker+nginx) - Heroku on Steroids.
- [Convox Rack](https://github.com/convox/rack) - Convox Rack is an open source PaaS built on top of expert infrastructure automation and devops best practices.
- [Dcw](https://github.com/pbertera/dcw) :ice_cube: - Docker-compose SSH wrapper: a very poor man's PaaS, exposing the docker-compose and custom-container commands defined in container labels.
- [Dokku](https://github.com/dokku/dokku) - Docker powered mini-Heroku that helps you build and manage the lifecycle of applications (originally by [progrium][progrium]).
- [Empire](https://github.com/remind101/empire) :ice_cube: - A PaaS built on top of Amazon EC2 Container Service (ECS).
- [Exoframe](https://github.com/exoframejs/exoframe) - A self-hosted tool that allows simple one-command deployments using Docker.
- [Hephy Workflow](https://github.com/teamhephy/workflow) :ice_cube: - Open source PaaS for Kubernetes that adds a developer-friendly layer to any Kubernetes cluster, making it easy to deploy and manage applications. Fork of [Deis Workflow](https://github.com/deis/workflow).
- [Krane](https://github.com/krane/krane) :ice_cube: - Toolset for managing container workloads on remote servers.
- [Nanobox](https://github.com/nanobox-io/nanobox) :ice_cube: - :yen: An application development platform that creates local environments that can then be deployed and scaled in the cloud.
- [OpenShift][openshift] - An open source PaaS built on [Kubernetes][kubernetes] and optimized for Dockerized app development and deployment by [Red Hat](https://www.redhat.com/en).
- [Tsuru](https://github.com/tsuru/tsuru) - Tsuru is an extensible and open source Platform as a Service software.

### Reverse Proxy

- [BunkerWeb](https://github.com/bunkerity/bunkerweb) - Open-source and next-gen Web Application Firewall (WAF). By [Bunkerity](https://www.bunkerity.com).
- [caddy-docker-proxy](https://github.com/lucaslorentz/caddy-docker-proxy) - Caddy-based reverse proxy, configured with service or container labels. By [lucaslorentz](https://github.com/lucaslorentz).
- [caddy-docker-upstreams](https://github.com/invzhi/caddy-docker-upstreams) - Docker upstreams module for Caddy, configured with container labels. By [invzhi](https://github.com/invzhi).
- [Docker Dnsmasq Updater](https://github.com/moonbuggy/docker-dnsmasq-updater) - Update a remote dnsmasq server with Docker container hostnames.
- [docker-flow-proxy](https://github.com/docker-flow/docker-flow-proxy) - Reconfigures proxy every time a new service is deployed, or when a service is scaled. By [docker-flow][docker-flow].
- [fabio](https://github.com/fabiolb/fabio) - A fast, modern, zero-conf load balancing HTTP(S) router for deploying microservices managed by Consul. By [magiconair](https://github.com/magiconair) (Frank Schroeder).
- [idle-less](https://github.com/tvup/idle-less) - Reverse proxy with automatic Wake-on-LAN: wakes sleeping backend servers when traffic arrives, shows a waiting screen, and redirects when ready. By [tvup](https://github.com/tvup).
- [Let's Encrypt Nginx-proxy Companion](https://github.com/nginx-proxy/docker-letsencrypt-nginx-proxy-companion) - A lightweight companion container for the nginx-proxy. It allows the automated creation/renewal of Let's Encrypt certificates. By [JrCs](https://github.com/JrCs).
- [mesh-router](https://github.com/Yundera/mesh-router) - Free domain (nsl.sh) provider for Docker containers with automatic HTTPS routing. Uses Wireguard VPN to securely route subdomain requests across networks. Ideal for self-hosted NAS and cloud deployments. By [Yundera](https://github.com/Yundera).
- [Nginx Proxy Manager](https://github.com/jc21/nginx-proxy-manager) - A beautiful web interface for proxying web based services with SSL. By [jc21](https://github.com/jc21).
- [nginx-proxy][nginxproxy] - Automated nginx proxy for Docker containers using docker-gen. By [jwilder][jwilder].
- [OpenResty Manager](https://github.com/Safe3/openresty-manager) - An easy-to-use, powerful and beautiful OpenResty manager (enhanced Nginx), an open source alternative to OpenResty Edge. By [Safe3](https://github.com/Safe3/).
- [Swarm Router](https://github.com/flavioaiello/swarm-router) - A «zero config» service-name-based router for docker swarm mode with a fresh and more secure approach. By [flavioaiello](https://github.com/flavioaiello).
- [Træfɪk](https://github.com/containous/traefik) - Automated reverse proxy and load-balancer for Docker, Mesos, Consul, Etcd... By [EmileVauge](https://github.com/emilevauge).
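
Label-driven proxies such as Træfɪk and caddy-docker-proxy build their routing tables from container metadata rather than from static config files. A minimal sketch of that idea (the `proxy.host`/`proxy.port` label names and the container dicts are illustrative inventions, not any proxy's real schema):

```python
# Build a host -> upstream routing table from container labels,
# the way label-driven reverse proxies do conceptually.
# "proxy.host" and "proxy.port" are made-up label names for this sketch.

def build_routes(containers):
    routes = {}
    for c in containers:
        labels = c.get("labels", {})
        host = labels.get("proxy.host")
        if not host:
            continue  # container carries no routing label: proxy ignores it
        port = labels.get("proxy.port", "80")
        routes[host] = f"{c['ip']}:{port}"
    return routes

containers = [
    {"ip": "172.17.0.2", "labels": {"proxy.host": "app.example.test", "proxy.port": "8080"}},
    {"ip": "172.17.0.3", "labels": {}},  # unlabeled: skipped
]
print(build_routes(containers))  # {'app.example.test': '172.17.0.2:8080'}
```

Real proxies rebuild this table on Docker events, so routes appear and disappear as containers start and stop.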

### Runtime

- [cri-o](https://github.com/cri-o/cri-o) - Open Container Initiative-based implementation of Kubernetes Container Runtime Interface by [cri-o](https://github.com/cri-o).
- [lxc](https://github.com/lxc/lxc) - LXC - Linux Containers.
- [podman](https://github.com/containers/libpod) - libpod is a library used to create container pods. Home of Podman.
- [rlxc](https://github.com/brauner/rlxc) :ice_cube: - LXC binary written in Rust.
- [runtime-tools](https://github.com/opencontainers/runtime-tools) - oci-runtime-tool is a collection of tools for working with the OCI runtime specification.

### Security

- [Anchor](https://github.com/SongStitch/anchor/) - A tool to ensure reproducible builds by pinning dependencies inside your Dockerfiles. By [SongStitch](https://github.com/songStitch/).
- [Anchore Enterprise](https://anchore.com/) - :yen: Analyze images for CVE vulnerabilities and against custom security policies.
- [Aqua Security](https://www.aquasec.com) - :yen: Securing container-based applications from Dev to Production on any platform.
- [bane](https://github.com/genuinetools/bane) :ice_cube: - AppArmor profile generator for Docker containers.
- [buildcage](https://github.com/dash14/buildcage) - Restricts outbound network access during Docker builds to prevent supply chain attacks, working as a drop-in BuildKit remote driver for Docker Buildx, with ready-to-use GitHub Actions.
- [CetusGuard](https://github.com/hectorm/cetusguard) - CetusGuard is a tool that protects the Docker daemon socket by filtering calls to its API endpoints.
- [Checkov](https://github.com/bridgecrewio/checkov) - Static analysis for infrastructure as code manifests (Terraform, Kubernetes, Cloudformation, Helm, Dockerfile, Kustomize) to find security misconfigurations and fix them. By [bridgecrew](https://github.com/bridgecrewio).
- [CIS Docker Benchmark](https://github.com/dev-sec/cis-docker-benchmark) :ice_cube: - This [InSpec][inspec] compliance profile implements the CIS Docker 1.12.0 Benchmark in an automated way to provide security best-practice tests around the Docker daemon and containers in a production environment. By [dev-sec](https://github.com/dev-sec).
- [Clair](https://github.com/quay/clair) - Clair is an open source project for the static analysis of vulnerabilities in appc and docker containers. By [coreos][coreos].
- [crowdsec-blocklist-import](https://github.com/wolffcatskyy/crowdsec-blocklist-import) - Aggregates 36 free threat intelligence feeds into 120k+ malicious IPs for CrowdSec bouncers, providing 10-20x more blocks than default lists. By [wolffcatskyy](https://github.com/wolffcatskyy).
- [Dagda](https://github.com/eliasgranderubio/dagda) :ice_cube: - Dagda is a tool to perform static analysis of known vulnerabilities, trojans, viruses, malware & other malicious threats in docker images/containers and to monitor the docker daemon and running docker containers for detecting anomalous activities. By [eliasgranderubio](https://github.com/eliasgranderubio).
- [Deepfence Enterprise](https://deepfence.io) - :yen: Full life cycle Cloud Native Workload Protection platform for Kubernetes, virtual machines and serverless. By [deepfence][deepfence].
- [Deepfence Threat Mapper](https://github.com/deepfence/ThreatMapper) - Powerful runtime vulnerability scanner for Kubernetes, virtual machines and serverless. By [deepfence][deepfence].
- [docker-bench-security](https://github.com/docker/docker-bench-security) - Script that checks for dozens of common best-practices around deploying Docker containers in production. By [docker][docker].
- [docker-explorer](https://github.com/google/docker-explorer) - A tool to help forensicate offline docker acquisitions.
- [dvwassl](https://github.com/Peco602/dvwassl) :ice_cube: - SSL-enabled Damn Vulnerable Web App to test Web Application Firewalls. By [Peco602][peco602].
- [KICS](https://github.com/checkmarx/kics) - An infrastructure-as-code scanning tool that finds security vulnerabilities, compliance issues, and infrastructure misconfigurations early in the development cycle. Can be extended for additional policies. By [Checkmarx](https://github.com/Checkmarx).
- [oscap-docker](https://github.com/OpenSCAP/openscap) - OpenSCAP provides the oscap-docker tool, used to scan Docker containers and images. By [OpenSCAP](https://github.com/OpenSCAP).
- [Prisma Cloud](https://www.paloaltonetworks.com/prisma/cloud) - :yen: (Previously Twistlock Security Suite) detects vulnerabilities, hardens container images, and enforces security policies across the lifecycle of applications.
- [segspec](https://github.com/dormstern/segspec) - Extracts network dependencies from Docker Compose, Kubernetes manifests, Helm charts, and other config files to generate Kubernetes NetworkPolicies with evidence tracing. By [dormstern](https://github.com/dormstern).
- [Syft](https://github.com/anchore/syft) - CLI tool and library for generating a Software Bill of Materials (SBOM) from container images and filesystems.
- [Sysdig Falco](https://github.com/falcosecurity/falco) - Sysdig Falco is an open source container security monitor. It can monitor application, container, host, and network activity and alert on unauthorized activity.
- [Sysdig Secure](https://www.sysdig.com/solutions/cloud-detection-and-response-cdr) - :yen: Sysdig Secure addresses run-time security through behavioral monitoring and defense, and provides deep forensics based on open source Sysdig for incident response.
- [Trend Micro DeepSecurity](https://www.trendmicro.com/en_us/business/products/hybrid-cloud/deep-security.html) - :yen: Trend Micro DeepSecurity offers runtime protection for container workloads and hosts as well as preruntime scanning of images to identify vulnerabilities, malware and content such as hardcoded secrets.
- [Trivy](https://github.com/aquasecurity/trivy) - Aqua Security's open source simple and comprehensive vulnerability scanner for containers (suitable for CI).
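
Several of the tools above (Anchor, for instance) enforce reproducible builds by pinning mutable image tags to immutable content digests. The rewrite itself is simple string surgery; a hedged sketch, where the digest value is a fabricated placeholder rather than anything fetched from a registry:

```python
import re

# Rewrite `FROM image:tag` lines to `FROM image:tag@sha256:...`,
# the pinning idea behind reproducible-build tools.
# In practice the digest comes from the registry; here it is a
# made-up placeholder keyed by image reference.
DIGESTS = {"python:3.12": "sha256:" + "0" * 64}  # hypothetical digest

def pin_base_images(dockerfile: str) -> str:
    def pin(match):
        ref = match.group(1)
        digest = DIGESTS.get(ref)
        # Leave lines without a known digest untouched.
        return f"FROM {ref}@{digest}" if digest else match.group(0)
    return re.sub(r"^FROM\s+(\S+)$", pin, dockerfile, flags=re.MULTILINE)

print(pin_base_images("FROM python:3.12\nRUN pip install flask\n"))
```

Once pinned, two builds of the same Dockerfile pull byte-identical base layers even if the upstream tag moves.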

### Service Discovery

- [docker-consul](https://github.com/gliderlabs/docker-consul) by [progrium][progrium]
- [docker-dns](https://github.com/bytesharky/docker-dns) - Lightweight DNS forwarder for Docker containers, resolves container names with custom suffixes (e.g. `.docker`) on the host to simplify service discovery.
- [etcd](https://github.com/etcd-io/etcd) - Distributed reliable key-value store for the most critical data of a distributed system by [etcd-io](https://github.com/etcd-io) (formerly part of CoreOS).
- [istio](https://github.com/istio/istio) - An open platform to connect, manage, and secure microservices.
- [registrator](https://github.com/gliderlabs/registrator) - Service registry bridge for Docker by [gliderlabs][gliderlabs] and [progrium][progrium].
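
Bridges like registrator turn container lifecycle events into entries in a service catalog (Consul, etcd, ...). A toy in-memory version of that contract — the event dicts below are simplified stand-ins for Docker's real event stream, not its actual format:

```python
# Minimal service registry driven by container lifecycle events,
# sketching what a registrator-style bridge does against a real backend.

class Registry:
    def __init__(self):
        self.services = {}  # service name -> "ip:port"

    def handle(self, event):
        name = event["name"]
        if event["status"] == "start":
            self.services[name] = f"{event['ip']}:{event['port']}"
        elif event["status"] == "die":
            self.services.pop(name, None)  # deregister on container exit

reg = Registry()
reg.handle({"status": "start", "name": "web", "ip": "10.0.0.5", "port": 8080})
reg.handle({"status": "start", "name": "db", "ip": "10.0.0.6", "port": 5432})
reg.handle({"status": "die", "name": "db", "ip": "10.0.0.6", "port": 5432})
print(reg.services)  # {'web': '10.0.0.5:8080'}
```

The real tools add health checks, TTLs, and backend-specific APIs on top of this start/deregister loop.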

### Volume Management / Data

- [Blockbridge](https://github.com/blockbridge/blockbridge-docker-volume) - :yen: The Blockbridge plugin is a volume plugin that provides access to an extensible set of container-based persistent storage options. It supports single and multi-host Docker environments with features that include tenant isolation, automated provisioning, encryption, secure deletion, snapshots and QoS. By [blockbridge](https://github.com/blockbridge).
- [Docker Volume Backup](https://github.com/offen/docker-volume-backup) - Backup Docker volumes locally or to any S3 compatible storage. By [offen](https://github.com/offen).
- [duplicacy-cli-cron](https://github.com/GeiserX/duplicacy-cli-cron) - Docker-based encrypted dual-storage backup automation using Duplicacy CLI with cross-site redundancy and Telegram notifications. By [GeiserX](https://github.com/GeiserX).
- [Label Backup](https://github.com/resulgg/label-backup) - A lightweight, Docker-aware backup agent that automatically discovers and backs up containerized databases (PostgreSQL, MySQL, MongoDB, Redis) based on Docker labels. Supports local storage and S3-compatible destinations with flexible scheduling via cron expressions.
- [Netshare](https://github.com/ContainX/docker-volume-netshare) - Docker NFS, AWS EFS, Ceph & Samba/CIFS Volume Plugin. By [ContainX][containx].
- [portworx](https://portworx.com) - :yen: Decentralized storage solution for persistent, shared and replicated volumes.
- [quobyte](https://www.quobyte.com/) - :yen: Fully fault-tolerant distributed file system with a docker volume driver.
- [REX-Ray](https://github.com/rexray/rexray) - Provides a vendor agnostic storage orchestration engine. The primary design goal is to provide persistent storage for Docker, Kubernetes, and Mesos. By [thecodeteam](https://github.com/thecodeteam) (DELL Technologies).
- [Storidge](https://github.com/Storidge/quick-start) :ice_cube: - :yen: Software-defined Persistent Storage for Kubernetes and Docker Swarm.

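Label Backup above illustrates a recurring pattern in this section: selecting which containers a tool acts on, and how, via labels. A small sketch of that discovery step — the `backup.*` label names are invented for the example, not Label Backup's actual schema:

```python
# Pick backup targets from container labels: which database engine to
# dump, and on what cron schedule. Label names are purely illustrative.

def backup_targets(containers):
    targets = []
    for c in containers:
        labels = c.get("labels", {})
        if labels.get("backup.enable") != "true":
            continue  # not opted in: skip
        targets.append({
            "container": c["name"],
            "engine": labels.get("backup.engine", "postgres"),
            "schedule": labels.get("backup.schedule", "0 3 * * *"),  # nightly default
        })
    return targets

containers = [
    {"name": "orders-db", "labels": {"backup.enable": "true", "backup.engine": "mysql"}},
    {"name": "cache", "labels": {}},  # no label: skipped
]
print(backup_targets(containers))
```
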
### User Interface

#### IDE integrations

Native desktop applications for managing and monitoring docker hosts and clusters.

- [Docker DB Manager](https://github.com/AbianS/docker-db-manager) - Desktop app for managing Docker database containers with visual interface and one-click operations.
- [Docker Desktop](https://www.docker.com/products/docker-desktop/) - Official native app. Only for Windows and macOS.
- [Simple Docker UI](https://github.com/felixgborrego/simple-docker-ui) - Built on Electron. By [felixgborrego](https://github.com/felixgborrego/).
- [Stevedore](https://github.com/slonopotamus/stevedore) - Good Docker Desktop replacement for Windows. Both Linux and Windows Containers are supported. By [slonopotamus](https://github.com/slonopotamus).

#### Terminal

##### Terminal UI

- [d4s](https://github.com/jr-k/d4s) - A fast, keyboard-driven terminal UI to manage Docker containers, Compose stacks, and Swarm services with the ergonomics of K9s.
- [dive](https://github.com/wagoodman/dive) - A tool for exploring each layer in a docker image. By [wagoodman](https://github.com/wagoodman).
- [dockdash](https://github.com/byrnedo/dockdash) - Detailed stats. By [byrnedo].
- [dockly](https://github.com/lirantal/dockly) - An interactive shell UI for managing Docker containers.
- [DockMate](https://github.com/shubh-io/dockmate) - Lightweight terminal-based Docker and Podman manager with a text-based user interface.
- [DockSTARTer](https://github.com/GhostWriters/DockSTARTer) - DockSTARTer helps you get started with home server apps running in Docker by [GhostWriters](https://github.com/GhostWriters).
- [dprs](https://github.com/durableprogramming/dprs) - A developer-focused TUI for managing Docker containers with real-time log streaming and container management. Built with Rust. By [durableprogramming](https://github.com/durableprogramming).
- [dry](https://github.com/moncho/dry) - An interactive CLI for Docker containers.
- [goManageDocker](https://github.com/ajayd-san/gomanagedocker) - TUI tool to view and manage your docker objects blazingly fast with sensible keybindings, also supports VIM navigation out of the box.
- [lazydocker](https://github.com/jesseduffield/lazydocker) - The lazier way to manage everything docker. A simple terminal UI for both docker and docker-compose, written in Go with the gocui library. By [jesseduffield](https://github.com/jesseduffield).
- [lazyjournal](https://github.com/Lifailon/lazyjournal) - An interface for reading and filtering the log output of Docker and Podman containers, like [Dozzle][dozzle] but for the terminal, with support for fuzzy find, regex and output coloring.
- [oxker](https://github.com/mrjackwills/oxker) - A simple tui to view & control docker containers. Written in [Rust](https://rust-lang.org/), making heavy use of [ratatui](https://github.com/tui-rs-revival/ratatui) & [Bollard](https://github.com/fussybeaver/bollard).

##### CLI tools

- [captain](https://github.com/jenssegers/captain) :ice_cube: - Easily start and stop docker compose projects from any directory. By [jenssegers](https://github.com/jenssegers).
- [dcinja](https://github.com/Falldog/dcinja) - A powerful template engine with a tiny binary size for docker command-line environments. By [Falldog](https://github.com/Falldog).
- [dcp](https://github.com/exdx/dcp) :ice_cube: - A simple tool for copying files from container filesystems. By [exdx](https://github.com/exdx).
- [dctl](https://github.com/FabienD/docker-stack) - dctl is a CLI tool that helps developers by allowing them to execute all docker compose commands anywhere in the terminal, and more. By [FabienD](https://github.com/FabienD).
- [decompose](https://github.com/s0rg/decompose) - Reverse-engineering tool for docker environments. By [s0rg](https://github.com/s0rg).
- [docker pushrm](https://github.com/christian-korneck/docker-pushrm) - A Docker CLI plugin that lets you push the README.md file from the current directory to Docker Hub. Also supports Quay and Harbor. By [christian-korneck](https://github.com/christian-korneck).
- [docker-captain](https://github.com/lucabello/docker-captain) - A friendly CLI to manage multiple Docker Compose deployments with style, powered by Typer, Rich, questionary, and sh.
- [DVM](https://github.com/howtowhale/dvm) :ice_cube: - Docker version manager.
- [goinside](https://github.com/iamsoorena/goinside) :ice_cube: - Get inside a running docker container easily.
- [Pdocker](https://github.com/g31s/Pdocker) :ice_cube: - A simple tool to manage and maintain Docker for personal projects.
- [proco](https://github.com/shiwaforce/poco) - Proco will help you to organise and manage Docker, Docker-Compose, Kubernetes projects of any complexity using simple YAML config files to shorten the route from finding your project to initialising it in your local environment.
|
||||||
- [scuba](https://github.com/JonathonReinhart/scuba) - Transparently use Docker containers to encapsulate software build environments, by [JonathonReinhart](https://github.com/JonathonReinhart)
|
- [scuba](https://github.com/JonathonReinhart/scuba) - Transparently use Docker containers to encapsulate software build environments,.
|
||||||
- [skopeo](https://github.com/containers/skopeo) - Work with remote images registries - retrieving information, images, signing content by [containers][containers]
|
- [skopeo](https://github.com/containers/skopeo) - Work with remote images registries - retrieving information, images, signing content.
|
||||||
- [supdock](https://github.com/segersniels/supdock) - Allows for slightly more visual usage of Docker with an interactive prompt. By [segersniels](https://github.com/segersniels)
|
- [supdock](https://github.com/segersniels/supdock) - Allows for slightly more visual usage of Docker with an interactive prompt. By [segersniels](https://github.com/segersniels).
|
||||||
- [tsaotun](https://github.com/qazbnm456/tsaotun) - Python based Assistance for Docker by [qazbnm456](https://github.com/qazbnm456)
|
|
||||||
|
|
||||||
|
- [tsaotun](https://github.com/qazbnm456/tsaotun) :ice_cube: - Python based Assistance for Docker.
|
||||||
##### Other

- [dext-docker-registry-plugin](https://github.com/vutran/dext-docker-registry-plugin) :ice_cube: - Search the Docker Registry with the Dext smart launcher. By [vutran](https://github.com/vutran).
- [docker-ssh](https://github.com/jeroenpeeters/docker-ssh) :ice_cube: - SSH Server for Docker containers ~ Because every container should be accessible. By [jeroenpeeters](https://github.com/jeroenpeeters).
- [dockerfile-mode](https://github.com/spotify/dockerfile-mode) - An Emacs mode for handling Dockerfiles. By [spotify][spotify].
- [MultiDocker](https://github.com/marty90/multidocker) :ice_cube: - Create a secure multi-user Docker machine, where each user is segregated into an independent container.
- [Powerline-Docker](https://github.com/adrianmo/powerline-docker) :ice_cube: - A Powerline segment for showing the status of Docker containers.

#### Web

- [Arcane](https://github.com/getarcaneapp/arcane) - An easy and modern Docker management platform, built with everybody in mind. By [getarcaneapp](https://github.com/getarcaneapp).
- [CASA](https://github.com/knrdl/casa) - Outsource the administration of a handful of containers to your co-workers.
- [Container Web TTY](https://github.com/wrfly/container-web-tty) - Connect your containers via a web-tty. By [wrfly](https://github.com/wrfly).
- [dockemon](https://github.com/ProductiveOps/dokemon) :ice_cube: - Docker Container Management GUI.
- [Docker Registry Browser](https://github.com/klausmeyer/docker-registry-browser) - Web Interface for the Docker Registry HTTP API v2.
- [docker-registry-web](https://github.com/mkuchin/docker-registry-web) :ice_cube: - Web UI, authentication service and event recorder for private docker registry v2.
- [docker-swarm-visualizer](https://github.com/dockersamples/docker-swarm-visualizer) - Visualizes Docker services on a Docker Swarm (for running demos).
- [dockge](https://github.com/louislam/dockge) - Easy-to-use and reactive self-hosted docker compose.yaml stack-oriented manager.
- [Komodo](https://github.com/mbecker20/komodo) - A tool to build and deploy software on many servers.
- [Kubevious](https://github.com/kubevious/kubevious) :ice_cube: - A highly visual web UI for Kubernetes which renders configuration and state in an application-centric way.
- [Mafl](https://github.com/hywax/mafl) - Minimalistic flexible homepage.
- [netdata](https://github.com/netdata/netdata) - Real-time performance monitoring.
- [OctoLinker](https://github.com/OctoLinker/OctoLinker) :ice_cube: - A browser extension for GitHub that makes the image name in a `Dockerfile` clickable and redirects you to the related Docker Hub page.
- [Portainer](https://github.com/portainer/portainer) - A lightweight management UI for managing your Docker hosts or Docker Swarm clusters.
- [Rapid Dashboard](https://github.com/ozlerhakan/rapid) :ice_cube: - A simple query dashboard to use Docker Remote API.
- [Seagull](https://github.com/tobegit3hub/seagull) :ice_cube: - Friendly Web UI to monitor docker daemon.
- [Swarmpit](https://github.com/swarmpit/swarmpit) - Swarmpit provides a simple and easy-to-use interface for your Docker Swarm cluster. You can manage your stacks, services, secrets, volumes, networks, etc.
- [Swirl](https://github.com/cuigh/swirl) :ice_cube: - Swirl is a web management tool for Docker, focused on swarm clusters. By [cuigh](https://github.com/cuigh/).
- [Theia](https://github.com/eclipse-theia/theia) - Extensible platform to develop full-fledged multi-language Cloud & Desktop IDE-like products with state-of-the-art web technologies.
- [usulnet](https://github.com/fr4nsys/usulnet) - A complete and modern Docker management platform designed for sysadmins and DevOps, with enterprise-grade tools: CVE scanner, web-based SSH and RDP, and much more. By [fr4nsys](https://github.com/fr4nsys).

## Docker Images

Tools and applications that are either installed inside containers or designed to be run as a [sidecar](https://learn.microsoft.com/en-us/azure/architecture/patterns/sidecar).

- [amicontained](https://github.com/genuinetools/amicontained) :ice_cube: - Container introspection tool. Find out what container runtime is being used as well as features available.
- [Chaperone](https://github.com/garywiz/chaperone) :ice_cube: - A single PID1 process designed for docker containers. Does user management, log management, startup, zombie reaping, all in one small package.
- [ckron](https://github.com/nicomt/ckron) - A cron-style job scheduler for docker.
- [CoreOS][coreos] - Linux for Massive Server Deployments.
- [distroless](https://github.com/GoogleContainerTools/distroless) - Language focused docker images, minus the operating system.
- [docker-alpine](https://github.com/gliderlabs/docker-alpine) :ice_cube: - A super small Docker base image _(5MB)_ using Alpine Linux.
- [docker-gen](https://github.com/jwilder/docker-gen) - Generate files from docker container meta-data.
- [dockerize](https://github.com/powerman/dockerize) - Utility to simplify running applications in docker containers by [jwilder][jwilder], [powerman][powerman].
- [GoSu](https://github.com/tianon/gosu) - Run this specific application as this specific user and get out of the pipeline (entrypoint script tool).
- [is-docker](https://github.com/sindresorhus/is-docker) - Check if the process is running inside a Docker container.
- [lstags](https://github.com/ivanilves/lstags) :ice_cube: - Sync Docker images across registries.
- [microcheck](https://github.com/tarampampam/microcheck) - Lightweight health check utilities for Docker containers (75 KB instead of 9.3 MB for httpcheck versus cURL) in pure C: http(s), port checks, and parallel execution are included.
- [Ofelia](https://github.com/mcuadros/ofelia/) - Ofelia is a modern and low footprint job scheduler for docker environments, built on Go. Ofelia aims to be a replacement for the old fashioned cron. Supports configuration from container labels and/or configuration files.
- [SparkView](https://github.com/beyondssl/sparkview-container) - Access VMs, desktops, servers or applications anytime and from anywhere, without complex and costly client roll-outs or user management.
- [su-exec](https://github.com/ncopa/su-exec) - A simple tool that executes a program with different privileges. The program is executed directly and not run as a child, like su and sudo do, which avoids TTY and signal issues. Why reinvent gosu? It does more or less exactly the same thing as gosu but is only 10kb instead of 1.8MB. By [ncopa](https://github.com/ncopa).
- [sue](https://github.com/theAkito/sue) :ice_cube: - Executes a program as a user different from the user running sue. A maintainable alternative to ncopa/su-exec, which is the better tianon/gosu. It outperforms the original gosu (higher performance, smaller size) while being far easier to maintain than su-exec, which is written in plain C. Made by [Akito][akito].
- [supercronic](https://github.com/aptible/supercronic) - Crontab-compatible job runner, designed specifically to run in containers.
- [TrivialRC](https://github.com/vorakl/TrivialRC) :ice_cube: - A minimalistic Runtime Configuration system and process manager for containers. By [vorakl](https://github.com/vorakl).

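Several of the tools above (GoSu, su-exec, sue) exist to drop root privileges at container startup. A minimal sketch of the pattern using su-exec on Alpine, where the image tag, user name, and application binary are all illustrative:

```dockerfile
# Sketch: drop root at container startup with su-exec (names are illustrative).
FROM alpine:3.20
RUN apk add --no-cache su-exec \
 && adduser -D appuser
COPY myapp /usr/local/bin/myapp
# su-exec exec()s the target program instead of forking a child,
# so signals reach the application and no extra PID lingers.
ENTRYPOINT ["su-exec", "appuser"]
CMD ["myapp"]
```

Because su-exec replaces itself with the target program, `docker stop` delivers its signal straight to the application rather than to a wrapper shell.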
### Builder

Applications designed to help or simplify building **new** images.

- [ansible-bender](https://github.com/ansible-community/ansible-bender) - A tool utilising `ansible` and `buildah`.
- [buildah](https://github.com/containers/buildah) - A tool that facilitates building OCI images.
- [BuildKit](https://github.com/moby/buildkit) - Concurrent, cache-efficient, and Dockerfile-agnostic builder toolkit.
- [cekit](https://github.com/cekit/cekit) - A tool used by OpenShift to build base images using different build engines.
- [container-factory](https://github.com/mutable/container-factory) :ice_cube: - Produces Docker images from tarballs of application source code.
- [copy-docker-image](https://github.com/mdlavin/copy-docker-image) :ice_cube: - Copy a Docker image between registries without a full Docker installation.
- [Derrick](https://github.com/alibaba/derrick) :ice_cube: - A tool that helps you automate Dockerfile generation and dockerize applications by scanning the code. By [alibaba](https://github.com/alibaba).
- [dlayer](https://github.com/orisano/dlayer) - Docker layer analyzer.
- [docker-companion](https://github.com/mudler/docker-companion) - A command line tool written in Golang to squash and unpack docker images.
- [docker-make](https://github.com/CtripCloud/docker-make) :ice_cube: - Build, tag, and push a bunch of related docker images via a single command.
- [docker-repack](https://github.com/orf/docker-repack) - Repacks a Docker image into a smaller, more efficient version that makes it significantly faster to pull. By [orf](https://github.com/orf).
- [docker-replay](https://github.com/bcicen/docker-replay) :ice_cube: - Generate `docker run` command and options from running containers. By [bcicen](https://github.com/bcicen).
- [DockerSlim](https://github.com/docker-slim/docker-slim) - Shrinks fat Docker images, creating the smallest possible images.
- [Dockly](https://github.com/swipely/dockly) :ice_cube: - Dockly is a gem made to ease the pain of packaging an application in Docker.
- [essex](https://github.com/utensils/essex) - Boilerplate for Docker Based Projects: Essex is a CLI utility written in bash to quickly setup clean and consistent Docker projects with Makefile driven workflows. By [jamesbrink](https://github.com/jamesbrink).
- [HPC Container Maker](https://github.com/NVIDIA/hpc-container-maker) - Generates Dockerfiles from a high level Python recipe, including building blocks for High-Performance Computing components.
- [img](https://github.com/genuinetools/img) - Standalone, daemon-less, unprivileged Dockerfile and OCI compatible container image builder.
- [packer](https://developer.hashicorp.com/packer/integrations/hashicorp/docker/latest/components/builder/docker) - Hashicorp tool to build machine images, including docker images, integrated with configuration management tools like Chef, Puppet, and Ansible.
- [portainer](https://github.com/duedil-ltd/portainer) :ice_cube: - Apache Mesos framework for building Docker images.
- [Production-Ready Python Containers :yen:](https://pythonspeed.com/products/pythoncontainer/) - A template for creating production-ready Docker images for Python applications.
- [RAUDI](https://github.com/cybersecsi/RAUDI) - A tool to automatically update (and optionally push to Docker Hub) Docker Images for 3rd party software whenever there is a new release/update/commit. By [SecSI](https://github.com/cybersecsi).
- [runlike](https://github.com/lavie/runlike) - Generate `docker run` command and options from running containers.
- [userdef](https://github.com/theAkito/userdef) :ice_cube: - An advanced `adduser` for your Alpine based Docker images. Made by [Akito][akito].
- [Whaler](https://github.com/P3GLEG/Whaler) - Program to reverse Docker images into Dockerfiles.
- [Whales](https://github.com/Gueils/whales) :ice_cube: - A tool to automatically dockerize your applications.

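Several of these builders (BuildKit, img, DockerSlim, dockerfilegraph) revolve around multi-stage builds: compile in a full toolchain image, then ship only the artifact. A minimal sketch, with illustrative names and versions:

```dockerfile
# Stage 1: build with the full Go toolchain (illustrative version tag).
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /out/app .

# Stage 2: ship only the static binary on a minimal distroless base.
FROM gcr.io/distroless/static
COPY --from=build /out/app /app
ENTRYPOINT ["/app"]
```

The final image carries none of the compiler or source code, which is the same size reduction tools like DockerSlim automate for images that were not built this way.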
### Dockerfile

- [chaperone-docker](https://github.com/garywiz/chaperone-docker) :ice_cube: - A set of images using the Chaperone process manager, including a lean Alpine image, LAMP, LEMP, and bare-bones base kits.
- [Dockerfile Generator](https://github.com/ozankasikci/dockerfile-generator) - `dfg` is both a Go library and an executable that produces valid Dockerfiles using various input channels.
- [Dockerfile Project](https://dockerfile.github.io/) - Trusted Automated Docker Builds. Dockerfile Project maintains a central repository of Dockerfiles for various popular open source software services runnable on a Docker container.
- [dockerfilegraph](https://github.com/patrickhoefler/dockerfilegraph) - Visualize your multi-stage Dockerfiles. By [PatrickHoefler](https://github.com/patrickhoefler).
- [Dockershelf](https://github.com/Dockershelf/dockershelf) - A repository that serves as a collector for docker recipes that are universal, efficient and slim. Images are updated, tested and published daily via a Travis cron job.
- [Vektorcloud](https://github.com/vektorcloud) - A collection of minimal, Alpine-based Docker images.

Examples by:

### Linter

- [Dockadvisor](https://github.com/deckrun/dockadvisor) - Lightweight Dockerfile linter with 60+ rules, quality scoring, and security checks.
- [docker-image-size-limit](https://github.com/wemake-services/docker-image-size-limit) - A tool to keep an eye on your docker images size.
- [Dockerfile Linter action](https://github.com/buddy-works/dockerfile-linter) :ice_cube: - The linter lets you verify Dockerfile syntax to make sure it follows the best practices for building efficient Docker images.
- [FROM:latest](https://github.com/replicatedhq/dockerfilelint) :ice_cube: - An opinionated Dockerfile linter.
- [Hadolint](https://github.com/hadolint/hadolint) - A Dockerfile linter that checks for best practices, common mistakes, and is also able to lint any bash written in `RUN` instructions.

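As a rough illustration of what these linters catch, a fragment like the following would draw warnings from most of them (a hypothetical Dockerfile; the exact rules and their IDs vary by tool):

```dockerfile
# Patterns most Dockerfile linters flag (hypothetical example):
FROM ubuntu:latest            # unpinned base image tag
RUN cd /app && make           # `cd` in RUN instead of WORKDIR
RUN apt-get install -y curl   # unpinned package, missing apt-get update in the same layer
```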
### Metadata

- [opencontainer](https://github.com/opencontainers/image-spec/blob/main/annotations.md) - A convention and shared namespace for Docker labels defined by OCI Image Spec.

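The OCI annotation keys are applied as ordinary Docker labels. A sketch of setting a few of the spec's pre-defined keys in a Dockerfile (the values are placeholders):

```dockerfile
LABEL org.opencontainers.image.source="https://github.com/example/app" \
      org.opencontainers.image.version="1.2.3" \
      org.opencontainers.image.licenses="MIT" \
      org.opencontainers.image.description="Example service image"
```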
### Registry

Services to securely store your Docker images.

- [Amazon Elastic Container Registry :yen:](https://aws.amazon.com/ecr/) - Amazon Elastic Container Registry (ECR) is a fully-managed Docker container registry that makes it easy for developers to store, manage, and deploy Docker container images.
- [Azure Container Registry :yen:](https://azure.microsoft.com/en-us/products/container-registry/#overview) - Manage a Docker private registry as a first-class Azure resource.
- [CargoOS](https://github.com/RedCoolBeans/cargos-buildroot) :ice_cube: - A bare essential OS for running the Docker Engine on bare metal or Cloud. By [RedCoolBeans](https://github.com/RedCoolBeans).
- [cleanreg](https://github.com/hcguersoy/cleanreg) :ice_cube: - A small tool to delete image manifests from a Docker Registry implementing the API v2, dereferencing them for the GC.
- [Cloudsmith :yen:](https://cloudsmith.com/product/formats/docker-registry) - A fully managed package management SaaS, with first-class support for public and private Docker registries (and many others, incl. Helm charts for the Kubernetes ecosystem). Has a generous free-tier and is also completely free for open-source.
- [Container Registry Service :yen:](https://container-registry.com/) - Harbor based Container Management Solution as a Service for teams and organizations. Free tier offers 1 GB storage for private repositories.
- [Cycle.io :yen:](https://cycle.io/) - Bare-metal container hosting.
- [DigitalOcean :yen:](https://www.digitalocean.com/products/container-registry) - DigitalOcean Container Registry.
- [Docker Hub](https://hub.docker.com/) - The default public registry, provided by Docker Inc.
- [Docker Registry v2][distribution] - The Docker toolset to pack, ship, store, and deliver content.
- [Docket](https://github.com/netvarun/docket) :ice_cube: - Custom docker registry that allows for lightning fast deploys through bittorrent.
- [Dragonfly](https://github.com/dragonflyoss/Dragonfly2) - Provide efficient, stable and secure file distribution and image acceleration based on p2p technology.
- [GCP Artifact Registry :yen:](https://cloud.google.com/artifact-registry/docs) - Fast, private Docker image storage on Google Cloud Platform.
- [Gitea Container Registry](https://docs.gitea.com/usage/packages/container) - Integrated Docker registry in Gitea, ideal for private, small-scale image hosting.
- [GitHub Container Registry](https://docs.github.com/en/packages/working-with-a-github-packages-registry/working-with-the-container-registry) - GitHub's solution for storing and managing Docker images, with tight integration into GitHub Actions.
- [GitLab Container Registry](https://docs.gitlab.com/user/packages/container_registry/) - Registry focused on using its images in GitLab CI.
- [Harbor](https://github.com/goharbor/harbor) - An open source trusted cloud native registry project that stores, signs, and scans content. Supports replication, user management, access control and activity auditing. By [CNCF](https://www.cncf.io), formerly [VMWare][vmware].
- [JFrog Artifactory :yen:](https://jfrog.com/artifactory/) - Artifact Repository Manager, can be used as a private Docker Registry as well.
- [Kraken](https://github.com/uber/kraken) - Uber's Highly scalable P2P docker registry, capable of distributing TBs of data in seconds.
- [nscr](https://github.com/jhstatewide/nscr) - A light-weight, self-contained container registry that's easy to run and maintain.
- [Quay.io :yen:](https://quay.io/) - Secure hosting for private Docker repositories.
- [Registryo](https://github.com/inmagik/registryo) - UI and token based authentication server for onpremise docker registry.
- [RepoFlow](https://www.repoflow.io) - A simple and easy-to-use package management platform with Docker support alongside other formats like PyPI, Maven, npm, and Helm. Includes smart search, built-in Docker image scanning, and a great free option for both self-hosted and cloud use.
- [Rescoyl](https://github.com/noteed/rescoyl) :ice_cube: - Private Docker registry (free and open source).
- [Sonatype Nexus Repository](https://www.sonatype.com/products/sonatype-nexus-repository) - Manage binaries and build artifacts across your software supply chain.

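Besides the hosted services above, the open-source registry can be self-hosted. A minimal sketch as a Compose file (the volume name and published port are illustrative):

```yaml
services:
  registry:
    image: registry:2                     # the reference Docker Registry v2 implementation
    ports:
      - "5000:5000"                       # push/pull via localhost:5000/<image>:<tag>
    volumes:
      - registry-data:/var/lib/registry   # persist image layers across restarts
volumes:
  registry-data:
```

Images are then addressed through the registry host, e.g. `docker tag myapp localhost:5000/myapp` followed by `docker push localhost:5000/myapp`.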
## Development with Docker

### API Client

- [ahab](https://github.com/instacart/ahab) :ice_cube: - Docker event handling with Python.
- [contajners](https://github.com/lispyclouds/contajners) - An idiomatic, data-driven, REPL friendly Clojure client for OCI container engines. By [lispyclouds][lispyclouds].
- [Docker Client for JVM](https://github.com/gesellix/docker-client) - A Docker remote api client library for the JVM, written in Groovy.
- [Docker Client TypeScript](https://gitlab.com/masaeedu/docker-client) - Docker API client for JavaScript, automatically generated from Swagger API definition from moby repository. By [masaeedu](https://github.com/masaeedu).
- [docker-controller-bot](https://github.com/dgongut/docker-controller-bot) - Telegram bot to control docker containers. By [dgongut](https://github.com/dgongut/).
- [docker-it-scala](https://github.com/whisklabs/docker-it-scala) :ice_cube: - Docker integration testing kit with Scala.
- [docker-java-api](https://github.com/amihaiemil/docker-java-api) :ice_cube: - Lightweight, truly object-oriented, Java client for Docker's API. By [amihaiemil](https://github.com/amihaiemil).
- [docker-maven-plugin](https://github.com/fabric8io/docker-maven-plugin) - A Maven plugin for running and creating Docker images.
- [Docker.DotNet](https://github.com/Microsoft/Docker.DotNet) - C#/.NET HTTP client for the Docker remote API.
- [Docker.Registry.DotNet](https://github.com/ChangemakerStudios/Docker.Registry.DotNet) - .NET (C#) Client Library for interacting with a Docker Registry API (v2) by [rquackenbush](https://github.com/rquackenbush).
- [dockerode](https://github.com/apocas/dockerode) - Docker Remote API node.js module.
- [DoMonit](https://github.com/eon01/DoMonit) :ice_cube: - A simple Docker monitoring wrapper for the Docker API.
- [go-dockerclient](https://github.com/fsouza/go-dockerclient/) - Go HTTP client for the Docker remote API.
- [Gradle Docker plugin](https://github.com/gesellix/gradle-docker-plugin) - A Docker remote api plugin for Gradle.
- [Portainer stack utils](https://github.com/greenled/portainer-stack-utils) - Bash script to deploy/update/undeploy Docker stacks in a Portainer instance from a docker-compose yaml file. By [greenled](https://github.com/greenled).
- [sbt-docker](https://github.com/marcuslonnberg/sbt-docker) - Create Docker images directly from sbt.

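All of these clients ultimately speak the Docker Engine HTTP API, usually over the daemon's Unix socket. A stdlib-only Python sketch of that idea (the socket path and API version are assumptions; real clients negotiate the version):

```python
import http.client
import json
import socket


class UnixHTTPConnection(http.client.HTTPConnection):
    """An http.client connection over a Unix domain socket, which is
    how most Docker API clients reach the local daemon."""

    def __init__(self, socket_path):
        super().__init__("localhost")  # host is unused; the socket path matters
        self.socket_path = socket_path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self.socket_path)
        self.sock = sock


def list_containers(socket_path="/var/run/docker.sock", api_version="1.43"):
    """GET /containers/json from the Engine API and return the parsed JSON."""
    conn = UnixHTTPConnection(socket_path)
    try:
        conn.request("GET", f"/v{api_version}/containers/json?all=true")
        return json.loads(conn.getresponse().read())
    finally:
        conn.close()
```

Calling `list_containers()` on a host with a running daemon returns the same data as `docker ps -a --format json`, one JSON object per container.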
### CI/CD

- [Buddy :yen:](https://buddy.works) - The best of Git, build & deployment tools combined into one powerful tool that supercharged our development.
- [Captain](https://github.com/harbur/captain) - Convert your Git workflow to Docker containers ready for Continuous Delivery.
- [Cyclone](https://github.com/caicloud/cyclone) :ice_cube: - Powerful workflow engine and end-to-end pipeline solutions implemented with native Kubernetes resources.
- [Defang](https://github.com/DefangLabs/defang) - Deploy Docker Compose to your favorite cloud in minutes.
- [Depot :yen:](https://depot.dev) - Build Docker images fast, in the cloud. Blazing fast compute, automatic intelligent caching, and zero configuration. [Done in seconds](https://depot.dev/#benchmarks).
- [Diun](https://github.com/crazy-max/diun) - Receive notifications when an image or repository is updated on a Docker registry by [crazy-max].
- [dockcheck](https://github.com/mag37/dockcheck) - A script that checks for docker image updates without pulling, then auto-updates selected/all containers. With notifications, pruning and more.
- [Docker plugin for Jenkins](https://github.com/jenkinsci/docker-plugin/) - The aim of the docker plugin is to be able to use a docker host to dynamically provision a slave, run a single build, then tear-down that slave.
- [Gantry](https://github.com/shizunge/gantry) - Automatically update selected Docker swarm services.
- [GitLab Runner](https://gitlab.com/gitlab-org/gitlab-runner) - GitLab has integrated CI to test, build and deploy your code with the use of GitLab runners.
- [Jaypore CI](https://github.com/theSage21/jaypore_ci) - Simple, very flexible, powerful CI / CD / automation system configured in Python. Offline and local first.
- [Kraken CI](https://github.com/Kraken-CI/kraken) - Modern CI/CD, open-source, on-premise system that is highly scalable and focused on testing. One of its executors is Docker.
- [Microservices Continuous Deployment](https://github.com/francescou/docker-continuous-deployment) :ice_cube: - Continuous deployment of a microservices application.
- [mu](https://github.com/stelligent/mu) :ice_cube: - Tool to configure CI/CD of your container applications via AWS CodePipeline, CodeBuild and ECS. By [Stelligent](https://github.com/stelligent).
- [Popper](https://github.com/systemslab/popper) :ice_cube: - Github actions workflow (HCL syntax) execution engine.
- [Screwdriver :yen:](https://screwdriver.cd/) - Yahoo's open-source build platform designed for Continuous Delivery.
- [Skipper](https://github.com/Stratoscale/skipper) - Easily dockerize your Git repository.
- [SwarmCI](https://github.com/ghostsquad/swarmci) :ice_cube: - Create a distributed, isolated task pipeline in your Docker Swarm.
- [Tekton CD](https://tekton.dev/) - A cloud-native pipeline resource.

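Most of these systems run the same build-tag-push pattern against a registry. A sketch as a GitLab CI job using Docker-in-Docker (the image tags and job name are illustrative; the `$CI_*` variables are GitLab's predefined CI/CD variables):

```yaml
build-image:
  image: docker:27
  services:
    - docker:27-dind    # sidecar Docker daemon for the job
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"
```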
### Development Environment

- [Binci](https://github.com/binci/binci) :ice_cube: - Containerize your development workflow. (formerly DevLab by [TechnologyAdvice](https://github.com/TechnologyAdvice)).
- [coder](https://github.com/coder/coder) - Remote development machines powered by Terraform or Docker.
- [construi](https://github.com/lstephen/construi) :ice_cube: - Run your builds inside a Docker defined environment.
- [dde](https://github.com/whatwedo/dde) - Local development environment toolset based on Docker. By [whatwedo](https://github.com/whatwedo).
- [DIP](https://github.com/bibendi/dip) - CLI utility for straightforward provisioning and interacting with an application configured by docker-compose. By [bibendi](https://github.com/bibendi).
- [dobi](https://github.com/dnephin/dobi) :ice_cube: - A build automation tool for Docker applications. By [dnephin](https://github.com/dnephin).
- [Docker Missing Tools](https://github.com/nandoquintana/docker-missing-tools) :ice_cube: - A set of bash commands to shortcut typical docker dev-ops. An alternative to creating typical helper scripts like "build.sh" and "deploy.sh" inside code repositories. By [NandoQuintana](https://github.com/nandoquintana).
- [Docker-Arch](https://github.com/Ph3nol/Docker-Arch) :ice_cube: - Generate Web/CLI projects Dockerized development environments, from 1 simple YAML file. By [Ph3nol](https://github.com/ph3nol).
- [Docker-sync](https://github.com/EugenMayer/docker-sync) - Drastically improves performance ([50-70x](https://github.com/EugenMayer/docker-sync/wiki/4.-Performance)) when using Docker for development on Mac OS X/Windows and Linux while sharing code to the container. By [EugenMayer](https://github.com/EugenMayer).
- [docker-vm](https://github.com/shyiko/docker-vm) :ice_cube: - Simple and transparent alternative to boot2docker (backed by Vagrant).
- [DockerDL](https://github.com/matifali/dockerdl) - Deep Learning Docker Images. Don't waste time setting up a deep learning env when you can get a deep learning environment with everything pre-installed.
- [Eclipse Che](https://github.com/eclipse/che) - Developer workspace server with Docker runtimes, cloud IDE, next-generation Eclipse IDE.
- [EnvCLI](https://github.com/EnvCLI/EnvCLI) - Replace your local installation of Node, Go, ... with project-specific docker containers. By [EnvCLI](https://github.com/EnvCLI).
- [ESP32 Linux - Docker builder](https://github.com/hpsaturn/esp32s3-linux) - Container solution to compile Linux and develop it for ESP32 microcontrollers. By [Hpsaturn](https://github.com/hpsaturn).
- [Gebug](https://github.com/moshebe/gebug) - A tool that makes debugging of Dockerized Go applications super easy by enabling Debugger and Hot-Reload features, seamlessly.
- [Kitt](https://github.com/senges/kitt) :ice_cube: - A portable and disposable Shell environment, based on Docker and Nix. By [senges](https://github.com/senges).
- [Lando](https://github.com/lando/lando) - Lando is for developers who want to quickly specify and painlessly spin up the services and tools needed to develop their projects. By [Tandem](https://www.thinktandem.io/).
- [Rust Universal Compiler](https://github.com/Peco602/rust-universal-compiler) :ice_cube: - Container solution to compile Rust projects for Linux, macOS and Windows. By [Peco602][peco602].
- [uniget](https://github.com/uniget-org/cli) - Uni(versal)get, the installer and updater for container tools and beyond (formerly docker-setup). By [nicholasdille](https://github.com/nicholasdille).
- [Vagga](https://github.com/tailhook/vagga) :ice_cube: - Vagga is a containerisation tool without daemons. It is a fully-userspace container engine inspired by Vagrant and Docker, specialized for development environments.
- [Zsh-in-Docker](https://github.com/deluan/zsh-in-docker) - Install Zsh, Oh-My-Zsh and plugins inside a Docker container with one line! By [Deluan](https://www.deluan.com).

### Garbage Collection

- [caduc](https://github.com/tjamet/caduc) :ice_cube: - A docker garbage collector cleaning stuff you did not use recently.
- [Docker Clean](https://github.com/ZZROTDesign/docker-clean) :ice_cube: - A script that cleans Docker containers, images and volumes.
- [docker-custodian](https://github.com/Yelp/docker-custodian) - Keep docker hosts tidy. By [Yelp](https://github.com/Yelp).
- [docker_gc](https://github.com/pdacity/docker_gc) :ice_cube: - Image for automatically removing unused Docker Swarm objects. Also works as a plain Docker service.
- [Docuum](https://github.com/stepchowfun/docuum) - Least recently used (LRU) eviction of Docker images.

### Serverless

- [Apache OpenWhisk](https://github.com/apache/openwhisk) - A serverless, open source cloud platform that executes functions in response to events at any scale. By [apache](https://github.com/apache).
- [Funker](https://github.com/bfirsh/funker-example-voting-app) :ice_cube: - Functions as Docker containers example voting app. By [bfirsh](https://github.com/bfirsh).
- [IronFunctions](https://github.com/iron-io/functions) :ice_cube: - The serverless microservices platform FaaS (Functions as a Service) which uses Docker containers to run any language or AWS Lambda functions.
- [Koyeb](https://www.koyeb.com/) :yen: - Koyeb is a developer-friendly serverless platform to deploy apps globally. Seamlessly run Docker containers, web apps, and APIs with git-based deployment, native autoscaling, a global edge network, and built-in service mesh and discovery.
- [OpenFaaS](https://github.com/openfaas/faas) - A complete serverless functions framework for Docker and Kubernetes. By [OpenFaaS](https://github.com/openfaas).
- [SCAR](https://github.com/grycap/scar) :ice_cube: - Serverless Container-aware Architectures (SCAR) is a serverless framework that allows easy deployment and execution of containers (e.g. Docker) in serverless environments (e.g. Lambda).

### Testing

- [Container Structure Test](https://github.com/GoogleContainerTools/container-structure-test) - A framework to validate the structure of an image by checking the outputs of commands or the contents of the filesystem. By [GoogleContainerTools][googlecontainertools].
- [dgoss](https://github.com/goss-org/goss/tree/master/extras/dgoss) - A fast YAML-based tool for validating Docker containers.
- [DockerSpec](https://github.com/zuazo/dockerspec) :ice_cube: - A small Ruby gem to easily run RSpec, Serverspec, Infrataster and Capybara tests against Dockerfiles or Docker images. By [zuazo](https://github.com/zuazo).
- [EZDC](https://github.com/lynchborg/ezdc) :ice_cube: - Golang test harness for easily setting up tests that rely on services in a docker-compose.yml. By [byrnedo].
- [InSpec][inspec] - InSpec is an open-source testing framework for infrastructure with a human- and machine-readable language for specifying compliance, security and policy requirements. By [chef](https://github.com/chef).
- [Kurtosis](https://github.com/kurtosis-tech/kurtosis) - A composable build system for multi-container test environments that provides developers with: a powerful Python-like SDK for environment configuration, a compile-time validator to verify environment behavior & setup, and a runtime for environment execution, monitoring, & debugging capabilities. By [Kurtosis](https://www.kurtosis.com/).
- [Pull Dog](https://github.com/apps/pull-dog) - A GitHub app that automatically creates Docker-based test environments for your pull requests, from your docker-compose files. Not open source.
- [Pumba](https://github.com/alexei-led/pumba) - Chaos testing tool for Docker. Can be deployed on Kubernetes and CoreOS clusters. By [alexei-led](https://github.com/alexei-led).
### Wrappers

- [Ansible](https://docs.ansible.com/projects/ansible/latest/collections/community/general/docker_container_module.html) - Manage the life cycle of Docker containers. By RedHat.
- [dexec](https://github.com/docker-exec/dexec) :ice_cube: - Command line interface written in Go for running code with Docker Exec images.
- [dockerized](https://github.com/benzaita/dockerized-cli) :ice_cube: - Seamlessly execute commands in a container.
- [Dray](https://github.com/CenturyLinkLabs/dray) :ice_cube: - An engine for managing the execution of container-based workflows.
- [Hokusai](https://github.com/artsy/hokusai) - A Docker + Kubernetes CLI for application developers; used to containerize an application and to manage its lifecycle throughout development, testing, and release cycles. From [artsy](https://github.com/artsy).
- [Preevy](https://github.com/livecycle/preevy) - Preview environments for Docker and Docker Compose projects. Test your changes and get feedback from devs and non-devs (Product/Design) by deploying pull requests to your cloud provider as part of your CI pipeline.
- [Shutit](https://github.com/ianmiell/shutit) :ice_cube: - Tool for building and maintaining complex Docker deployments.
- [subuser](https://github.com/subuser-security/subuser) - Makes it easy to securely and portably run graphical desktop applications in Docker.
- [Terraform cloud-init config](https://github.com/christippett/terraform-cloudinit-container-server) :ice_cube: - Terraform module for deploying a single Docker image or `docker-compose.yaml` file to any Cloud™ VM.
- [Turbo](https://github.com/ramitsurana/turbo) :ice_cube: - A simple and powerful utility for Docker. By [ramitsurana][ramitsurana].
- [udocker](https://github.com/indigo-dc/udocker) - A tool to execute simple Docker containers in batch or interactive systems without root privileges.
- [Vagrant - Docker provider](https://developer.hashicorp.com/vagrant/docs/providers/docker/basics) - A good starting point is [vagrant-docker-example](https://github.com/bubenkoff/vagrant-docker-example).
## Services based on Docker (mostly :yen:)

### CI Services

- [CircleCI](https://circleci.com/) - :yen: Push or pull Docker images from your build environment, or build and run containers right on CircleCI.
- [CodeFresh](https://codefresh.io) - :yen: Everything you need to build, test, and share your Docker applications. Provides automated end-to-end testing.
- [CodeShip](https://www.cloudbees.com/blog/how-to-run-codeship-parallel-test-pipelines-efficiently-for-optimal-ci-parallelization) - :yen: Work with your established Docker workflows while automating your testing and deployment tasks with our hosted platform dedicated to speed and security.
- [ConcourseCI](https://concourse-ci.org) - :yen: A pipeline-oriented CI SaaS platform for developers and DevOps teams.
- [Semaphore CI](https://semaphore.io/) - :yen: A high-performance cloud solution that makes it easy to build, test and ship your containers to production.
- [TravisCI](https://www.travis-ci.com/) - :yen: A free continuous integration SaaS platform for GitHub projects, for developers and DevOps.
### CaaS

- [Amazon ECS](https://aws.amazon.com/ecs/) - :yen: A management service on EC2 that supports Docker containers.
- [Appfleet](https://appfleet.com/) - :yen: Edge platform to deploy and manage containerized services globally. The system routes traffic to the closest location for lower latency.
- [Azure AKS](https://azure.microsoft.com/en-us/products/kubernetes-service/) - :yen: Simplify Kubernetes management, deployment, and operations. Use a fully managed Kubernetes container orchestration service.
- [Cloud 66](https://www.cloud66.com) - :yen: Full-stack hosted container management as a service.
- [Giant Swarm](https://www.giantswarm.io/) - :yen: Simple microservice infrastructure. Deploy your containers in seconds.
- [Google Container Engine](https://docs.cloud.google.com/kubernetes-engine/docs) - :yen: Docker containers on Google Cloud Computing powered by [Kubernetes][kubernetes].
- [Mesosphere DC/OS Platform](https://d2iq.com/products/dcos) - :yen: Integrated platform for data and containers built on Apache Mesos.
- [Red Hat OpenShift Dedicated](https://www.redhat.com/en/technologies/cloud-computing/openshift/dedicated) - :yen: Fully-managed Red Hat® OpenShift® service on Amazon Web Services and Google Cloud.
- [Triton](https://www.joyent.com/) - :yen: Elastic container-native infrastructure by Joyent.
### Monitoring Services

- [AppDynamics](https://github.com/Appdynamics/docker-monitoring-extension) - Docker Monitoring extension gathers metrics from the Docker Remote API, either using Unix Socket or TCP.
- [Better Stack](https://betterstack.com/community/guides/scaling-docker/) - :yen: A Docker-compatible observability stack that delivers robust log aggregation and uptime monitoring capabilities for various software applications.
- [Broadcom Docker Monitoring](https://www.broadcom.com/info/aiops/docker-monitoring) - :yen: Agile Operations solutions from Broadcom deliver the modern Docker monitoring businesses need to accelerate and optimize the performance of microservices and the dynamic Docker environments running them. Monitors both the Docker environment and the apps that run inside it (formerly CA Technologies).
- [Collecting docker logs and stats with Splunk](https://www.splunk.com/en_us/blog/tips-and-tricks/collecting-docker-logs-and-stats-with-splunk.html)
- [Datadog](https://www.datadoghq.com/) - :yen: Datadog is a full-stack monitoring service for large-scale cloud environments that aggregates metrics/events from servers, databases, and applications. It includes support for Docker, Kubernetes, and Mesos.
- [Prometheus](https://prometheus.io/) - :yen: Open-source service monitoring system and time series database.
- [Site24x7](https://www.site24x7.com/docker-monitoring.html) - :yen: Docker Monitoring for DevOps and IT is a SaaS Pay per Host model.
- [SPM for Docker](https://github.com/sematext/sematext-agent-docker) :ice_cube: - :yen: Monitoring of host and container metrics, Docker events and logs. Automatic log parser. Anomaly detection and alerting for metrics and logs. By [sematext](https://github.com/sematext).
- [Sysdig Monitor](https://www.sysdig.com/products/monitor) - :yen: Sysdig Monitor can be used as either software or a SaaS service to monitor, alert, and troubleshoot containers using system calls. It has container-specific features for Docker and Kubernetes.
# Useful Resources

- **[Valuable Docker Links](http://nane.kratzke.pages.mylab.th-luebeck.de/about/blog/2014/08/24/valuable-docker-links/)** - High-quality articles about Docker! **MUST SEE**
- [Cloud Native Landscape](https://github.com/cncf/landscape)
- [Docker Blog](https://www.docker.com/blog/) - Regular updates about Docker, the community and tools.
- [Docker Certification](https://intellipaat.com/docker-training-course/?US) - :yen: Learn Docker containerization, running Docker containers, image creation, Dockerfile, Docker orchestration, security best practices, and more through hands-on projects and case studies, and prepare for the Docker Certified Associate exam.
- [Docker dev bookmarks](https://www.codever.dev/search?q=docker) - Use the tag [docker](https://www.codever.dev/bookmarks/t/docker).
- [Docker in Action, Second Edition](https://www.manning.com/books/docker-in-action-second-edition)
- [Docker in Practice, Second Edition](https://www.manning.com/books/docker-in-practice-second-edition)
- [Docker packaging guide for Python](https://pythonspeed.com/docker/) - A series of detailed articles on the specifics of Docker packaging for Python.
- [Learn Docker in a Month of Lunches](https://www.manning.com/books/learn-docker-in-a-month-of-lunches)
- [Learn Docker](https://coursesity.com/blog/best-docker-tutorials/) - A curated list of the top online Docker tutorials and courses.
- [Programming Community Curated Resources for learning Docker](https://hackr.io/tutorials/learn-docker)
## Awesome Lists

- [Awesome CI/CD](https://github.com/cicdops/awesome-ciandcd) :ice_cube: - Not specific to Docker but relevant.
- [Awesome Compose](https://github.com/docker/awesome-compose) - Docker Compose samples.
- [Awesome Kubernetes](https://github.com/ramitsurana/awesome-kubernetes) by [ramitsurana][ramitsurana].
- [Awesome Linux Container](https://github.com/Friz-zy/awesome-linux-containers) - More general than this repo, covering Linux containers broadly. By [Friz-zy](https://github.com/Friz-zy).
- [Awesome Selfhosted](https://github.com/awesome-selfhosted/awesome-selfhosted) - A list of free software network services and web applications that can be hosted locally, either the classical way (set up a local web server and run applications from there) or in a Docker container. By [Kickball](https://github.com/Kickball).
@@ -687,9 +700,9 @@ Services to securely store your Docker images.

## Good Tips
- [Docker Caveats](http://docker-saigon.github.io/post/Docker-Caveats/) - What you should know about running Docker in production (written 11 April 2016). **MUST SEE**
- [Docker Containers on the Desktop](https://blog.jessfraz.com/post/docker-containers-on-the-desktop/) - The **funniest way** to learn about Docker, by [jessfraz][jessfraz], who also gave a [presentation](https://www.youtube.com/watch?v=1qlLUf7KtAw) about it @ DockerCon 2015.
- [Docker vs. VMs? Combining Both for Cloud Portability Nirvana](https://www.flexera.com/blog/finops/)
- [Dockerfile best practices](https://github.com/hexops/dockerfile) :ice_cube: - This repository has best practices for writing Dockerfiles.
- [Don't Repeat Yourself with Anchors, Aliases and Extensions in Docker Compose Files](https://medium.com/@kinghuang/docker-compose-anchors-aliases-extensions-a1e4105d70bd) by [King Chung Huang](https://github.com/kinghuang).
- [GUI Apps with Docker](http://fabiorehm.com/blog/2014/09/11/running-gui-apps-with-docker/) by [fgrehm][fgrehm].
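The Compose anchors-and-aliases tip above can be sketched with a minimal fragment (the service names, images, and the `x-logging` key are made up for illustration): a YAML anchor (`&`) names a block once, and aliases (`*`) reuse it across services.

```yaml
# x-* extension fields are ignored by Compose itself, so they are a safe
# place to define reusable blocks. &default-logging names the block; each
# service reuses it with the *default-logging alias.
x-logging: &default-logging
  driver: json-file
  options:
    max-size: "10m"

services:
  web:
    image: nginx:alpine
    logging: *default-logging
  api:
    image: example/api:latest # hypothetical image
    logging: *default-logging
```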
@@ -761,13 +774,11 @@ Services to securely store your Docker images.

## Stargazers over time

[![Stargazers over time](https://starchart.cc/veggiemonk/awesome-docker.svg)](https://starchart.cc/veggiemonk/awesome-docker)
[contributing]: https://github.com/veggiemonk/awesome-docker/blob/master/.github/CONTRIBUTING.md
[calico]: https://github.com/projectcalico/calico
[containx]: https://github.com/ContainX
[coreos]: https://github.com/coreos
[deepfence]: https://github.com/deepfence
[distribution]: https://github.com/docker/distribution
@@ -777,10 +788,7 @@ Services to securely store your Docker images.
[dozzle]: https://github.com/amir20/dozzle
[editreadme]: https://github.com/veggiemonk/awesome-docker/edit/master/README.md
[fgrehm]: https://github.com/fgrehm
[gliderlabs]: https://github.com/gliderlabs
[googlecontainertools]: https://github.com/GoogleContainerTools
[inspec]: https://github.com/inspec/inspec
[jessfraz]: https://github.com/jessfraz
@@ -788,20 +796,18 @@ Services to securely store your Docker images.
[jwilder]: https://github.com/jwilder
[kubernetes]: https://kubernetes.io
[lispyclouds]: https://github.com/lispyclouds
[nginxproxy]: https://github.com/nginx-proxy/nginx-proxy
[openshift]: https://okd.io/
[powerman]: https://github.com/powerman
[progrium]: https://github.com/progrium
[ramitsurana]: https://github.com/ramitsurana
[sindresorhus]: https://github.com/sindresorhus/awesome
[spotify]: https://github.com/spotify
[vegasbrianc]: https://github.com/vegasbrianc
[vmware]: https://github.com/vmware
[byrnedo]: https://github.com/byrnedo
[crazy-max]: https://github.com/crazy-max
[skanehira]: https://github.com/skanehira
[akito]: https://github.com/theAkito
[peco602]: https://github.com/Peco602
[weave]: https://github.com/weaveworks/weave
build.js
@@ -1,51 +0,0 @@
const fs = require('fs-extra');
const cheerio = require('cheerio');
const showdown = require('showdown');

process.env.NODE_ENV = 'production';

const LOG = {
  error: (...args) => console.error('❌ ERROR', { ...args }),
  debug: (...args) => {
    if (process.env.DEBUG) console.log('💡 DEBUG: ', { ...args });
  },
};
const handleFailure = (err) => {
  LOG.error(err);
  process.exit(1);
};

process.on('unhandledRejection', handleFailure);

// --- FILES
const README = 'README.md';
const WEBSITE_FOLDER = 'website';
const indexTemplate = `${WEBSITE_FOLDER}/index.tmpl.html`;
const indexDestination = `${WEBSITE_FOLDER}/index.html`;

async function processIndex() {
  const converter = new showdown.Converter();
  converter.setFlavor('github');

  try {
    LOG.debug('Loading files...', { indexTemplate, README });
    const template = await fs.readFile(indexTemplate, 'utf8');
    const markdown = await fs.readFile(README, 'utf8');

    LOG.debug('Merging files...');
    const $ = cheerio.load(template);
    $('#md').append(converter.makeHtml(markdown));

    LOG.debug('Writing index.html');
    await fs.outputFile(indexDestination, $.html(), 'utf8');
    LOG.debug('DONE 👍');
  } catch (err) {
    handleFailure(err);
  }
}

async function main() {
  await processIndex();
}

main();
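For context on what the deleted script did: build.js rendered README.md to HTML with showdown and spliced the result into the page template's `#md` element with cheerio. A stdlib-only Go sketch of just the splicing step (the `renderIndex` helper and the `<div id="md">` placeholder are assumptions for illustration, not the repo's actual Go implementation):

```go
package main

import (
	"fmt"
	"strings"
)

// renderIndex splices pre-rendered README HTML into the page template,
// mirroring build.js's `$('#md').append(...)` step: the first occurrence
// of the empty #md placeholder is replaced with a filled-in div.
func renderIndex(template, readmeHTML string) string {
	return strings.Replace(template,
		`<div id="md"></div>`,
		`<div id="md">`+readmeHTML+`</div>`,
		1)
}

func main() {
	tmpl := `<html><body><div id="md"></div></body></html>`
	fmt.Println(renderIndex(tmpl, "<h1>Awesome Docker</h1>"))
	// prints: <html><body><div id="md"><h1>Awesome Docker</h1></div></body></html>
}
```

A real implementation would also need a markdown renderer; the Go rewrite in this diff wires that up behind its `build` subcommand instead of a standalone script.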
683
cmd/awesome-docker/main.go
Normal file
683
cmd/awesome-docker/main.go
Normal file
@@ -0,0 +1,683 @@
package main

import (
	"context"
	"fmt"
	"os"
	"strconv"
	"strings"

	"github.com/spf13/cobra"
	"github.com/veggiemonk/awesome-docker/internal/builder"
	"github.com/veggiemonk/awesome-docker/internal/cache"
	"github.com/veggiemonk/awesome-docker/internal/checker"
	"github.com/veggiemonk/awesome-docker/internal/linter"
	"github.com/veggiemonk/awesome-docker/internal/parser"
	"github.com/veggiemonk/awesome-docker/internal/scorer"
	"github.com/veggiemonk/awesome-docker/internal/tui"
)

const (
	readmePath      = "README.md"
	excludePath     = "config/exclude.yaml"
	templatePath    = "config/website.tmpl.html"
	healthCachePath = "config/health_cache.yaml"
	websiteOutput   = "website/index.html"
	version         = "0.1.0"
)

type checkSummary struct {
	ExternalTotal int
	GitHubTotal   int
	Broken        []checker.LinkResult
	Redirected    []checker.LinkResult
	GitHubErrors  []error
	GitHubSkipped bool
}

func main() {
	root := &cobra.Command{
		Use:   "awesome-docker",
		Short: "Quality tooling for the awesome-docker curated list",
	}

	root.AddCommand(
		versionCmd(),
		lintCmd(),
		checkCmd(),
		healthCmd(),
		buildCmd(),
		reportCmd(),
		validateCmd(),
		ciCmd(),
		browseCmd(),
	)

	if err := root.Execute(); err != nil {
		os.Exit(1)
	}
}

func versionCmd() *cobra.Command {
	return &cobra.Command{
		Use:   "version",
		Short: "Print version",
		Run:   func(cmd *cobra.Command, args []string) { fmt.Printf("awesome-docker v%s\n", version) },
	}
}

func parseReadme() (parser.Document, error) {
	f, err := os.Open(readmePath)
	if err != nil {
		return parser.Document{}, err
	}
	defer f.Close()
	return parser.Parse(f)
}

func collectURLs(sections []parser.Section, urls *[]string) {
	for _, s := range sections {
		for _, e := range s.Entries {
			*urls = append(*urls, e.URL)
		}
		collectURLs(s.Children, urls)
	}
}

type entryMeta struct {
	Category    string
	Description string
}

func collectEntriesWithCategory(sections []parser.Section, parentPath string, out map[string]entryMeta) {
	for _, s := range sections {
		path := s.Title
		if parentPath != "" {
			path = parentPath + " > " + s.Title
		}
		for _, e := range s.Entries {
			out[e.URL] = entryMeta{Category: path, Description: e.Description}
		}
		collectEntriesWithCategory(s.Children, path, out)
	}
}

func runLinkChecks(prMode bool) (checkSummary, error) {
	doc, err := parseReadme()
	if err != nil {
		return checkSummary{}, fmt.Errorf("parse: %w", err)
	}

	var urls []string
	collectURLs(doc.Sections, &urls)

	exclude, err := cache.LoadExcludeList(excludePath)
	if err != nil {
		return checkSummary{}, fmt.Errorf("load exclude list: %w", err)
	}

	ghURLs, extURLs := checker.PartitionLinks(urls)

	summary := checkSummary{
		ExternalTotal: len(extURLs),
		GitHubTotal:   len(ghURLs),
	}

	results := checker.CheckLinks(extURLs, 10, exclude)
	for _, r := range results {
		if !r.OK {
			summary.Broken = append(summary.Broken, r)
		}
		if r.Redirected {
			summary.Redirected = append(summary.Redirected, r)
		}
	}

	if prMode {
		summary.GitHubSkipped = true
		return summary, nil
	}

	token := os.Getenv("GITHUB_TOKEN")
	if token == "" {
		summary.GitHubSkipped = true
		return summary, nil
	}

	gc := checker.NewGitHubChecker(token)
	_, errs := gc.CheckRepos(context.Background(), ghURLs, 50)
	summary.GitHubErrors = errs
	return summary, nil
}

func runHealth(ctx context.Context) error {
	token := os.Getenv("GITHUB_TOKEN")
	if token == "" {
		return fmt.Errorf("GITHUB_TOKEN environment variable is required")
	}

	doc, err := parseReadme()
	if err != nil {
		return fmt.Errorf("parse: %w", err)
	}

	var urls []string
	collectURLs(doc.Sections, &urls)
	ghURLs, _ := checker.PartitionLinks(urls)

	fmt.Printf("Scoring %d GitHub repositories...\n", len(ghURLs))
	gc := checker.NewGitHubChecker(token)
	infos, errs := gc.CheckRepos(ctx, ghURLs, 50)
	for _, e := range errs {
		fmt.Printf("  error: %v\n", e)
	}
	if len(infos) == 0 {
		if len(errs) > 0 {
			return fmt.Errorf("failed to fetch GitHub metadata for all repositories (%d errors); check network/DNS and GITHUB_TOKEN", len(errs))
		}
		return fmt.Errorf("no GitHub repositories found in README")
	}

	scored := scorer.ScoreAll(infos)

	meta := make(map[string]entryMeta)
	collectEntriesWithCategory(doc.Sections, "", meta)
	for i := range scored {
		if m, ok := meta[scored[i].URL]; ok {
			scored[i].Category = m.Category
			scored[i].Description = m.Description
		}
	}

	cacheEntries := scorer.ToCacheEntries(scored)

	hc, err := cache.LoadHealthCache(healthCachePath)
	if err != nil {
		return fmt.Errorf("load cache: %w", err)
	}
	hc.Merge(cacheEntries)
	if err := cache.SaveHealthCache(healthCachePath, hc); err != nil {
		return fmt.Errorf("save cache: %w", err)
	}

	fmt.Printf("Cache updated: %d entries in %s\n", len(hc.Entries), healthCachePath)
	return nil
}

func scoredFromCache() ([]scorer.ScoredEntry, error) {
	hc, err := cache.LoadHealthCache(healthCachePath)
	if err != nil {
		return nil, fmt.Errorf("load cache: %w", err)
	}
	if len(hc.Entries) == 0 {
		return nil, fmt.Errorf("no cache data, run 'health' first")
	}

	scored := make([]scorer.ScoredEntry, 0, len(hc.Entries))
	for _, e := range hc.Entries {
		scored = append(scored, scorer.ScoredEntry{
			URL:         e.URL,
			Name:        e.Name,
			Status:      scorer.Status(e.Status),
			Stars:       e.Stars,
			Forks:       e.Forks,
			HasLicense:  e.HasLicense,
			LastPush:    e.LastPush,
			Category:    e.Category,
			Description: e.Description,
		})
	}
	return scored, nil
}

func markdownReportFromCache() (string, error) {
	scored, err := scoredFromCache()
	if err != nil {
		return "", err
	}
	return scorer.GenerateReport(scored), nil
}

func writeGitHubOutput(path, key, value string) error {
	if path == "" {
		return nil
	}
	f, err := os.OpenFile(path, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0o644)
	if err != nil {
		return fmt.Errorf("open github output file: %w", err)
	}
	defer f.Close()
	if _, err := fmt.Fprintf(f, "%s=%s\n", key, value); err != nil {
		return fmt.Errorf("write github output: %w", err)
	}
	return nil
}

func sanitizeOutputValue(v string) string {
	v = strings.ReplaceAll(v, "\n", " ")
	v = strings.ReplaceAll(v, "\r", " ")
	return strings.TrimSpace(v)
}

func buildBrokenLinksIssueBody(summary checkSummary, runErr error) string {
	var b strings.Builder
	b.WriteString("# Broken Links Detected\n\n")

	if runErr != nil {
		b.WriteString("The link checker failed to execute cleanly.\n\n")
		b.WriteString("## Failure\n\n")
		fmt.Fprintf(&b, "- %s\n\n", runErr)
	} else {
		fmt.Fprintf(&b, "- Broken links: %d\n", len(summary.Broken))
		fmt.Fprintf(&b, "- Redirected links: %d\n", len(summary.Redirected))
		fmt.Fprintf(&b, "- GitHub API errors: %d\n\n", len(summary.GitHubErrors))

		if len(summary.Broken) > 0 {
			b.WriteString("## Broken Links\n\n")
			for _, r := range summary.Broken {
				fmt.Fprintf(&b, "- `%s` -> `%d %s`\n", r.URL, r.StatusCode, strings.TrimSpace(r.Error))
			}
			b.WriteString("\n")
		}

		if len(summary.GitHubErrors) > 0 {
			b.WriteString("## GitHub API Errors\n\n")
			for _, e := range summary.GitHubErrors {
				fmt.Fprintf(&b, "- `%s`\n", e)
			}
			b.WriteString("\n")
		}
	}

	b.WriteString("## Action Required\n\n")
	b.WriteString("- Update the URL if the resource moved\n")
	b.WriteString("- Remove the entry if permanently unavailable\n")
	b.WriteString("- Add to `config/exclude.yaml` if a known false positive\n")
	b.WriteString("- Investigate GitHub API/auth failures when present\n\n")
	b.WriteString("---\n")
	b.WriteString("*Auto-generated by awesome-docker ci broken-links*\n")
	return b.String()
}

func buildHealthReportIssueBody(report string, healthErr error) string {
	var b strings.Builder
	if healthErr != nil {
		b.WriteString("WARNING: health refresh failed in this run; showing latest cached report.\n\n")
		fmt.Fprintf(&b, "Error: `%s`\n\n", healthErr)
	}
	b.WriteString(report)
	if !strings.HasSuffix(report, "\n") {
		b.WriteString("\n")
	}
	b.WriteString("\n---\n")
	b.WriteString("*Auto-generated weekly by awesome-docker ci health-report*\n")
	return b.String()
}

func lintCmd() *cobra.Command {
	var fix bool
	cmd := &cobra.Command{
		Use:   "lint",
		Short: "Validate README formatting",
		RunE: func(cmd *cobra.Command, args []string) error {
			doc, err := parseReadme()
			if err != nil {
				return fmt.Errorf("parse: %w", err)
			}

			result := linter.Lint(doc)
			for _, issue := range result.Issues {
				fmt.Println(issue)
			}

			if result.Errors > 0 {
				fmt.Printf("\n%d errors, %d warnings\n", result.Errors, result.Warnings)
				if !fix {
					return fmt.Errorf("lint failed with %d errors", result.Errors)
				}
				count, err := linter.FixFile(readmePath)
				if err != nil {
					return fmt.Errorf("fix: %w", err)
				}
				fmt.Printf("Fixed %d lines in %s\n", count, readmePath)
			} else {
				fmt.Printf("OK: %d warnings\n", result.Warnings)
			}

			return nil
		},
	}
	cmd.Flags().BoolVar(&fix, "fix", false, "Auto-fix formatting issues")
	return cmd
}

func checkCmd() *cobra.Command {
	var prMode bool
	cmd := &cobra.Command{
		Use:   "check",
		Short: "Check links for reachability",
		RunE: func(cmd *cobra.Command, args []string) error {
			summary, err := runLinkChecks(prMode)
			if err != nil {
				return err
			}

			fmt.Printf("Checking %d external links...\n", summary.ExternalTotal)
			if !prMode {
				if summary.GitHubSkipped {
					fmt.Println("GITHUB_TOKEN not set, skipping GitHub repo checks")
				} else {
					fmt.Printf("Checking %d GitHub repositories...\n", summary.GitHubTotal)
				}
			}

			for _, e := range summary.GitHubErrors {
				fmt.Printf("  GitHub error: %v\n", e)
			}

			if len(summary.Redirected) > 0 {
				fmt.Printf("\n%d redirected links (consider updating):\n", len(summary.Redirected))
				for _, r := range summary.Redirected {
					fmt.Printf("  %s -> %s\n", r.URL, r.RedirectURL)
				}
			}

			if len(summary.Broken) > 0 {
				fmt.Printf("\n%d broken links:\n", len(summary.Broken))
				for _, r := range summary.Broken {
					fmt.Printf("  %s -> %d %s\n", r.URL, r.StatusCode, r.Error)
				}
			}
			if len(summary.Broken) > 0 && len(summary.GitHubErrors) > 0 {
				return fmt.Errorf("found %d broken links and %d GitHub API errors", len(summary.Broken), len(summary.GitHubErrors))
			}
			if len(summary.Broken) > 0 {
				return fmt.Errorf("found %d broken links", len(summary.Broken))
			}
			if len(summary.GitHubErrors) > 0 {
				return fmt.Errorf("github checks failed with %d errors", len(summary.GitHubErrors))
			}

			fmt.Println("All links OK")
			return nil
		},
	}
	cmd.Flags().BoolVar(&prMode, "pr", false, "PR mode: skip GitHub API checks")
	return cmd
}

func healthCmd() *cobra.Command {
	return &cobra.Command{
		Use:   "health",
		Short: "Score repository health and update cache",
		RunE: func(cmd *cobra.Command, args []string) error {
			return runHealth(context.Background())
		},
	}
}

func buildCmd() *cobra.Command {
	return &cobra.Command{
		Use:   "build",
		Short: "Generate website from README",
		RunE: func(cmd *cobra.Command, args []string) error {
			if err := builder.Build(readmePath, templatePath, websiteOutput); err != nil {
				return err
			}
			fmt.Printf("Website built: %s\n", websiteOutput)
			return nil
		},
	}
}

func reportCmd() *cobra.Command {
	var jsonOutput bool
	cmd := &cobra.Command{
		Use:   "report",
		Short: "Generate health report from cache",
		RunE: func(cmd *cobra.Command, args []string) error {
			scored, err := scoredFromCache()
			if err != nil {
				return err
			}

			if jsonOutput {
				payload, err := scorer.GenerateJSONReport(scored)
				if err != nil {
					return fmt.Errorf("json report: %w", err)
				}
				fmt.Println(string(payload))
				return nil
			}

			report := scorer.GenerateReport(scored)
			fmt.Print(report)
			return nil
		},
	}

	cmd.Flags().BoolVar(&jsonOutput, "json", false, "Output full health report as JSON")
	return cmd
}

func validateCmd() *cobra.Command {
	return &cobra.Command{
		Use:   "validate",
		Short: "PR validation: lint + check --pr",
		RunE: func(cmd *cobra.Command, args []string) error {
			fmt.Println("=== Linting ===")
			doc, err := parseReadme()
			if err != nil {
				return fmt.Errorf("parse: %w", err)
			}

			result := linter.Lint(doc)
			for _, issue := range result.Issues {
				fmt.Println(issue)
			}
			if result.Errors > 0 {
				fmt.Printf("\n%d errors, %d warnings\n", result.Errors, result.Warnings)
				return fmt.Errorf("lint failed with %d errors", result.Errors)
			}
			fmt.Printf("Lint OK: %d warnings\n", result.Warnings)

			fmt.Println("\n=== Checking links (PR mode) ===")
			summary, err := runLinkChecks(true)
			if err != nil {
				return err
			}
			fmt.Printf("Checking %d external links...\n", summary.ExternalTotal)
			if len(summary.Broken) > 0 {
				fmt.Printf("\n%d broken links:\n", len(summary.Broken))
				for _, r := range summary.Broken {
					fmt.Printf("  %s -> %d %s\n", r.URL, r.StatusCode, r.Error)
				}
				return fmt.Errorf("found %d broken links", len(summary.Broken))
			}

			fmt.Println("\nValidation passed")
			return nil
		},
	}
}

func ciCmd() *cobra.Command {
	cmd := &cobra.Command{
		Use:   "ci",
		Short: "CI-oriented helper commands",
	}
	cmd.AddCommand(
		ciBrokenLinksCmd(),
		ciHealthReportCmd(),
	)
	return cmd
}

func ciBrokenLinksCmd() *cobra.Command {
	var issueFile string
	var githubOutput string
	var strict bool

	cmd := &cobra.Command{
		Use:   "broken-links",
		Short: "Run link checks and emit CI outputs/artifacts",
		RunE: func(cmd *cobra.Command, args []string) error {
			summary, runErr := runLinkChecks(false)

			hasErrors := runErr != nil || len(summary.Broken) > 0 || len(summary.GitHubErrors) > 0
			exitCode := 0
			if hasErrors {
				exitCode = 1
			}
			if runErr != nil {
				exitCode = 2
			}

			if issueFile != "" && hasErrors {
				body := buildBrokenLinksIssueBody(summary, runErr)
				if err := os.WriteFile(issueFile, []byte(body), 0o644); err != nil {
					return fmt.Errorf("write issue file: %w", err)
				}
			}

			if err := writeGitHubOutput(githubOutput, "has_errors", strconv.FormatBool(hasErrors)); err != nil {
				return err
			}
			if err := writeGitHubOutput(githubOutput, "check_exit_code", strconv.Itoa(exitCode)); err != nil {
				return err
			}
			if err := writeGitHubOutput(githubOutput, "broken_count", strconv.Itoa(len(summary.Broken))); err != nil {
				return err
			}
			if err := writeGitHubOutput(githubOutput, "github_error_count", strconv.Itoa(len(summary.GitHubErrors))); err != nil {
				return err
			}
			if runErr != nil {
				if err := writeGitHubOutput(githubOutput, "run_error", sanitizeOutputValue(runErr.Error())); err != nil {
					return err
				}
			}

			if runErr != nil {
				fmt.Printf("CI broken-links run error: %v\n", runErr)
			}
			if hasErrors {
				fmt.Printf("CI broken-links found %d broken links and %d GitHub errors\n", len(summary.Broken), len(summary.GitHubErrors))
			} else {
				fmt.Println("CI broken-links found no errors")
			}

			if strict {
				if runErr != nil {
					return runErr
				}
				if hasErrors {
					return fmt.Errorf("found %d broken links and %d GitHub API errors", len(summary.Broken), len(summary.GitHubErrors))
				}
			}
			return nil
		},
	}

	cmd.Flags().StringVar(&issueFile, "issue-file", "broken_links_issue.md", "Path to write issue markdown body")
	cmd.Flags().StringVar(&githubOutput, "github-output", "", "Path to GitHub output file (typically $GITHUB_OUTPUT)")
	cmd.Flags().BoolVar(&strict, "strict", false, "Return non-zero when errors are found")
	return cmd
}

func ciHealthReportCmd() *cobra.Command {
	var issueFile string
	var githubOutput string
	var strict bool

	cmd := &cobra.Command{
		Use:   "health-report",
		Short: "Refresh health cache, render report, and emit CI outputs/artifacts",
		RunE: func(cmd *cobra.Command, args []string) error {
			healthErr := runHealth(context.Background())
			report, reportErr := markdownReportFromCache()

			healthOK := healthErr == nil
			reportOK := reportErr == nil
			hasReport := reportOK && strings.TrimSpace(report) != ""
			hasErrors := !healthOK || !reportOK

			if hasReport && issueFile != "" {
				body := buildHealthReportIssueBody(report, healthErr)
				if err := os.WriteFile(issueFile, []byte(body), 0o644); err != nil {
					return fmt.Errorf("write issue file: %w", err)
				}
			}

			if err := writeGitHubOutput(githubOutput, "has_report", strconv.FormatBool(hasReport)); err != nil {
				return err
			}
			if err := writeGitHubOutput(githubOutput, "health_ok", strconv.FormatBool(healthOK)); err != nil {
				return err
			}
			if err := writeGitHubOutput(githubOutput, "report_ok", strconv.FormatBool(reportOK)); err != nil {
				return err
			}
			if err := writeGitHubOutput(githubOutput, "has_errors", strconv.FormatBool(hasErrors)); err != nil {
				return err
			}
			if healthErr != nil {
				if err := writeGitHubOutput(githubOutput, "health_error", sanitizeOutputValue(healthErr.Error())); err != nil {
					return err
				}
			}
			if reportErr != nil {
				if err := writeGitHubOutput(githubOutput, "report_error", sanitizeOutputValue(reportErr.Error())); err != nil {
					return err
				}
			}

			if healthErr != nil {
				fmt.Printf("CI health-report health error: %v\n", healthErr)
			}
			if reportErr != nil {
				fmt.Printf("CI health-report report error: %v\n", reportErr)
			}
			if hasReport {
				fmt.Println("CI health-report generated report artifact")
			} else {
				fmt.Println("CI health-report has no report artifact")
			}

			if strict {
				if healthErr != nil {
					return healthErr
				}
				if reportErr != nil {
					return reportErr
				}
			}
			return nil
		},
	}

	cmd.Flags().StringVar(&issueFile, "issue-file", "health_report.txt", "Path to write health issue markdown body")
	cmd.Flags().StringVar(&githubOutput, "github-output", "", "Path to GitHub output file (typically $GITHUB_OUTPUT)")
	cmd.Flags().BoolVar(&strict, "strict", false, "Return non-zero when health/report fails")
	return cmd
}

func browseCmd() *cobra.Command {
	var cachePath string
	cmd := &cobra.Command{
		Use:   "browse",
		Short: "Interactive TUI browser for awesome-docker resources",
		RunE: func(cmd *cobra.Command, args []string) error {
			hc, err := cache.LoadHealthCache(cachePath)
			if err != nil {
				return fmt.Errorf("load cache: %w", err)
			}
			if len(hc.Entries) == 0 {
				return fmt.Errorf("no cache data; run 'awesome-docker health' first")
			}
			return tui.Run(hc.Entries)
		},
	}
	cmd.Flags().StringVar(&cachePath, "cache", healthCachePath, "Path to health cache YAML")
	return cmd
}
18
config/exclude.yaml
Normal file
@@ -0,0 +1,18 @@
# URLs or URL prefixes to skip during link checking.
# These are known false positives or rate-limited domains.
domains:
  - https://vimeo.com
  - https://travis-ci.org/veggiemonk/awesome-docker.svg
  - https://github.com/apps/
  - https://twitter.com
  - https://www.meetup.com/
  - https://cycle.io/
  - https://www.manning.com/
  - https://deepfence.io
  - https://cdn.rawgit.com/sindresorhus/awesome/d7305f38d29fed78fa85652e3a63e154dd8e8829/media/badge.svg
  - https://www.se-radio.net/2017/05/se-radio-episode-290-diogo-monica-on-docker-security
  - https://www.reddit.com/r/docker/
  - https://www.udacity.com/course/scalable-microservices-with-kubernetes--ud615
  - https://www.youtube.com/playlist
  - https://www.aquasec.com
  - https://cloudsmith.com
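The comments above describe these entries as URL prefixes. The actual matching lives in `cache.LoadExcludeList` and `checker.CheckLinks`, which this diff does not show; as a hedged sketch of how prefix exclusion could work (the `excluded` function is hypothetical):

```go
package main

import (
	"fmt"
	"strings"
)

// excluded reports whether url starts with any configured prefix,
// matching the "URLs or URL prefixes to skip" semantics described
// in config/exclude.yaml. Illustrative only.
func excluded(url string, prefixes []string) bool {
	for _, p := range prefixes {
		if strings.HasPrefix(url, p) {
			return true
		}
	}
	return false
}

func main() {
	prefixes := []string{"https://twitter.com", "https://github.com/apps/"}
	fmt.Println(excluded("https://twitter.com/docker", prefixes)) // true
	fmt.Println(excluded("https://example.com/docker", prefixes)) // false
}
```

Prefix (rather than exact) matching is what lets a single entry like `https://github.com/apps/` cover every GitHub App URL in the list.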
3010
config/health_cache.yaml
Normal file
File diff suppressed because it is too large
725
config/website.tmpl.html
Normal file
@@ -0,0 +1,725 @@
|
|||||||
|
<!DOCTYPE html>
|
||||||
|
<html class="no-js" lang="en">
|
||||||
|
<head>
|
||||||
|
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
|
||||||
|
<meta http-equiv="Cache-control" content="public" />
|
||||||
|
<meta charset="UTF-8" />
|
||||||
|
<title>Awesome-docker</title>
|
||||||
|
<meta name="viewport" content="width=device-width, initial-scale=1" />
|
||||||
|
<meta name="theme-color" media="(prefers-color-scheme: light)" content="#0B77B7" />
|
||||||
|
<meta name="theme-color" media="(prefers-color-scheme: dark)" content="#13344C" />
|
||||||
|
<meta name="color-scheme" content="light dark" />
|
||||||
|
<meta
|
||||||
|
name="description"
|
||||||
|
content="A curated list of Docker resources and projects."
|
||||||
|
/>
|
||||||
|
<meta
|
||||||
|
name="keywords"
|
||||||
|
content="free and open-source open source projects for docker moby kubernetes linux awesome awesome-list container tools dockerfile list moby docker-container docker-image docker-environment docker-deployment docker-swarm docker-api docker-monitoring docker-machine docker-security docker-registry"
|
||||||
|
/>
|
||||||
|
<meta
|
||||||
|
name="google-site-verification"
|
||||||
|
content="_yiugvz0gCtfsBLyLl1LnkALXb6D4ofiwCyV1XOlYBM"
|
||||||
|
/>
|
||||||
|
<link rel="icon" type="image/png" href="favicon.png" />
|
||||||
|
<link rel="preconnect" href="https://fonts.googleapis.com" />
|
||||||
|
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin />
|
||||||
|
<link
|
||||||
|
href="https://fonts.googleapis.com/css2?family=Manrope:wght@400;500;700;800&family=Sora:wght@600;700;800&display=swap"
|
||||||
|
rel="stylesheet"
|
||||||
|
/>
|
||||||
|
<style>
|
||||||
|
:root {
|
||||||
|
--bg: #f3f8fb;
|
||||||
|
--bg-top: #eaf5fb;
|
||||||
|
--bg-bottom: #f6fbff;
|
||||||
|
--bg-spot-1: rgba(200, 232, 248, 1);
|
||||||
|
--bg-spot-2: rgba(213, 240, 255, 1);
|
||||||
|
--surface: #ffffff;
|
||||||
|
--surface-soft: #f7fbfe;
|
||||||
|
--text: #1f2d3d;
|
||||||
|
--muted: #4e6279;
|
||||||
|
--heading: #103a5c;
|
||||||
|
--link: #065f95;
|
||||||
|
--link-hover: #044971;
|
||||||
|
--border: #dbe7f0;
|
||||||
|
--marker: #2f77a8;
|
||||||
|
--hr: #d3e4f0;
|
||||||
|
--focus-ring: #0a67a5;
|
||||||
|
--focus-halo: rgba(10, 103, 165, 0.28);
|
||||||
|
--header-grad-start: #0d4d78;
|
||||||
|
--header-grad-mid: #0b77b7;
|
||||||
|
--header-grad-end: #43a8d8;
|
||||||
|
--header-orb-1: rgba(255, 255, 255, 0.3);
|
||||||
|
--header-orb-2: rgba(84, 195, 245, 0.5);
|
||||||
|
--code-bg: #edf4fa;
|
||||||
|
--code-border: #dce7f0;
|
||||||
|
--code-text: #1f2d3d;
|
||||||
|
--pre-bg: #0e2334;
|
||||||
|
--pre-border: #d8e3ed;
|
||||||
|
--pre-text: #e2edf5;
|
||||||
|
--table-bg: #ffffff;
|
||||||
|
--table-header-bg: #f0f7fc;
|
||||||
|
--toc-bg: linear-gradient(180deg, #f8fcff 0%, #f3f9fd 100%);
|
||||||
|
--toc-title: #214f72;
|
||||||
|
--toc-child-border: #c8deed;
|
||||||
|
--toc-link: #175886;
|
||||||
|
--toc-link-hover: #0f3f61;
|
||||||
|
--toc-link-hover-bg: #e6f2fa;
|
||||||
|
--toc-link-active: #0d3e61;
|
||||||
|
--toc-link-active-bg: #d9ebf8;
|
||||||
|
--toc-link-active-ring: #bcd8ec;
|
||||||
|
--shadow: 0 22px 50px -34px rgba(17, 57, 88, 0.42);
|
||||||
|
}
|
||||||
|
|
||||||
|
@media (prefers-color-scheme: dark) {
|
||||||
|
:root {
|
||||||
|
--bg: #0e1620;
|
||||||
|
--bg-top: #101a25;
|
||||||
|
--bg-bottom: #0d141d;
|
||||||
|
--bg-spot-1: rgba(23, 47, 68, 0.85);
|
||||||
|
--bg-spot-2: rgba(19, 61, 89, 0.62);
|
||||||
|
--surface: #121e2a;
|
||||||
|
--surface-soft: #162634;
|
||||||
|
--text: #d5e4f2;
|
||||||
|
--muted: #9fb7ce;
|
||||||
|
--heading: #f0f7ff;
|
||||||
|
--link: #7cc7f5;
|
||||||
|
--link-hover: #a1d8fb;
|
||||||
|
--border: #2b4257;
|
||||||
|
--marker: #78b6dd;
|
||||||
|
--hr: #284154;
|
||||||
|
--focus-ring: #84cbf8;
|
||||||
|
--focus-halo: rgba(132, 203, 248, 0.32);
|
||||||
|
--header-grad-start: #12344c;
|
||||||
|
--header-grad-mid: #185c86;
|
||||||
|
--header-grad-end: #23759f;
|
||||||
|
--header-orb-1: rgba(133, 198, 242, 0.24);
|
||||||
|
--header-orb-2: rgba(61, 141, 189, 0.36);
|
||||||
|
--code-bg: #1b2b3a;
|
||||||
|
--code-border: #2b4257;
|
||||||
|
--code-text: #d8e8f7;
|
||||||
|
--pre-bg: #0b1622;
|
||||||
|
--pre-border: #2b4257;
|
||||||
|
--pre-text: #dceaf7;
|
||||||
|
--table-bg: #121e2a;
|
||||||
|
--table-header-bg: #1a2b3a;
|
||||||
|
--toc-bg: linear-gradient(180deg, #162736 0%, #132432 100%);
|
||||||
|
--toc-title: #b6d6ee;
|
||||||
|
--toc-child-border: #335067;
|
||||||
|
--toc-link: #a9d4f2;
|
||||||
|
--toc-link-hover: #d7ecfc;
|
||||||
|
--toc-link-hover-bg: #20384b;
|
||||||
|
--toc-link-active: #e6f5ff;
|
||||||
|
--toc-link-active-bg: #294a62;
|
||||||
|
--toc-link-active-ring: #3e6482;
|
||||||
|
--shadow: 0 28px 60px -36px rgba(0, 0, 0, 0.78);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
* {
  box-sizing: border-box;
}

html {
  -ms-text-size-adjust: 100%;
  -webkit-text-size-adjust: 100%;
  scroll-behavior: smooth;
}

body {
  margin: 0;
  min-height: 100vh;
  font-family: "Manrope", "Segoe UI", "Helvetica Neue", Arial, sans-serif;
  font-size: 16px;
  line-height: 1.65;
  color: var(--text);
  background:
    radial-gradient(circle at 12% 12%, var(--bg-spot-1) 0, transparent 40%),
    radial-gradient(circle at 85% 2%, var(--bg-spot-2) 0, transparent 32%),
    linear-gradient(180deg, var(--bg-top) 0%, var(--bg) 34%, var(--bg-bottom) 100%);
}

a {
  color: var(--link);
  text-decoration: none;
  text-underline-offset: 0.16em;
  transition: color 140ms ease, text-decoration-color 140ms ease;
}

a:hover {
  color: var(--link-hover);
  text-decoration: underline;
}

:where(a, button, input, select, textarea, summary, [tabindex]):focus-visible {
  outline: 3px solid var(--focus-ring);
  outline-offset: 3px;
  box-shadow: 0 0 0 4px var(--focus-halo);
  border-radius: 7px;
}

strong {
  font-weight: 800;
}

p {
  margin: 0;
}

img {
  border: 0;
  max-width: 100%;
}

svg:not(:root) {
  overflow: hidden;
}

.btn {
  display: inline-block;
  padding: 0.72rem 1.15rem;
  color: #ffffff;
  font-weight: 700;
  letter-spacing: 0.01em;
  background: rgba(255, 255, 255, 0.18);
  border: 1px solid rgba(255, 255, 255, 0.42);
  border-radius: 10px;
  box-shadow: inset 0 1px 0 rgba(255, 255, 255, 0.28);
}

.btn:hover {
  color: #ffffff;
  text-decoration: none;
  background: rgba(255, 255, 255, 0.3);
}

.page-header {
  position: relative;
  overflow: hidden;
  text-align: center;
  color: #ffffff;
  background-image: linear-gradient(128deg, var(--header-grad-start) 5%, var(--header-grad-mid) 57%, var(--header-grad-end) 100%);
}

.page-header::before,
.page-header::after {
  content: "";
  position: absolute;
  border-radius: 999px;
  pointer-events: none;
}

.page-header::before {
  width: 30rem;
  height: 30rem;
  right: -7rem;
  top: -19rem;
  background: radial-gradient(circle, var(--header-orb-1) 0%, transparent 68%);
}

.page-header::after {
  width: 28rem;
  height: 28rem;
  left: -10rem;
  bottom: -20rem;
  background: radial-gradient(circle, var(--header-orb-2) 0%, transparent 70%);
}

.page-header > * {
  position: relative;
  z-index: 1;
}

.project-name {
  margin: 0 0 0.55rem;
  font-family: "Sora", "Avenir Next", "Segoe UI", Arial, sans-serif;
  font-size: clamp(2rem, 4.4vw, 3.4rem);
  line-height: 1.05;
  letter-spacing: -0.028em;
}

.project-tagline {
  margin: 0 auto;
  max-width: 46rem;
  color: rgba(255, 255, 255, 0.92);
  font-size: clamp(1.02rem, 1.8vw, 1.22rem);
  line-height: 1.45;
}

.header-actions {
  margin-top: 1.5rem;
  display: flex;
  align-items: center;
  justify-content: center;
  gap: 0.8rem;
  flex-wrap: wrap;
}

.site-shell {
  max-width: 76rem;
  margin: -2.2rem auto 0;
  padding: 0 1rem 2.5rem;
}

.main-content {
  background: var(--surface);
  border: 1px solid var(--border);
  border-radius: 20px;
  box-shadow: var(--shadow);
  word-wrap: break-word;
  overflow-wrap: anywhere;
  max-width: 72rem;
  margin: 0 auto;
  padding: clamp(1.35rem, 1rem + 1.45vw, 2.6rem) clamp(1rem, 0.55rem + 2.15vw, 2.8rem);
}

.main-content > * {
  max-width: 70ch;
}

.main-content > :first-child {
  margin-top: 0;
}

.main-content h1,
.main-content h2,
.main-content h3,
.main-content h4,
.main-content h5 {
  font-family: "Sora", "Avenir Next", "Segoe UI", Arial, sans-serif;
  line-height: 1.2;
  letter-spacing: -0.015em;
  color: var(--heading);
  scroll-margin-top: 1.1rem;
  margin: 2rem 0 0.8rem;
}

.main-content h1 {
  font-size: clamp(1.6rem, 2.5vw, 2.2rem);
}

.main-content h2 {
  font-size: clamp(1.45rem, 2.2vw, 1.92rem);
}

.main-content h3 {
  font-size: clamp(1.3rem, 2vw, 1.62rem);
}

.main-content h4,
.main-content h5 {
  font-size: clamp(1.15rem, 1.85vw, 1.34rem);
}

.main-content p,
.main-content li {
  color: var(--text);
  line-height: 1.76;
}

.main-content p + p {
  margin-top: 0.95rem;
}

.main-content ul,
.main-content ol {
  padding-left: 1.4rem;
  margin: 0.62rem 0 1.2rem;
}

.main-content li + li {
  margin-top: 0.34rem;
}

.main-content ul li::marker {
  color: var(--marker);
}

.main-content hr {
  max-width: 100%;
  border: 0;
  height: 1px;
  margin: 2.2rem 0;
  background: linear-gradient(90deg, transparent, var(--hr) 18%, var(--hr) 82%, transparent);
}

.main-content blockquote {
  margin: 1.3rem 0;
  padding: 0.9rem 1.15rem;
  border-left: 4px solid #68b0da;
  border-radius: 0 12px 12px 0;
  background: var(--surface-soft);
  color: var(--muted);
}

.main-content blockquote > :first-child {
  margin-top: 0;
}

.main-content blockquote > :last-child {
  margin-bottom: 0;
}

.main-content code {
  font-family: "SFMono-Regular", Menlo, Consolas, "Liberation Mono", monospace;
  font-size: 0.88em;
  background: var(--code-bg);
  border: 1px solid var(--code-border);
  border-radius: 6px;
  padding: 0.08em 0.38em;
  color: var(--code-text);
}

.main-content pre {
  max-width: 100%;
  overflow: auto;
  border: 1px solid var(--pre-border);
  border-radius: 12px;
  background: var(--pre-bg);
  box-shadow: inset 0 1px 0 rgba(255, 255, 255, 0.06);
  padding: 0.9rem 1rem;
}

.main-content pre code {
  padding: 0;
  border: 0;
  background: transparent;
  color: var(--pre-text);
}

.main-content table {
  width: 100%;
  max-width: 100%;
  border-collapse: collapse;
  border: 1px solid var(--border);
  border-radius: 12px;
  overflow: hidden;
  background: var(--table-bg);
  margin: 1.2rem 0 1.6rem;
}

.main-content th,
.main-content td {
  border: 1px solid var(--border);
  padding: 0.52rem 0.68rem;
  text-align: left;
}

.main-content th {
  background: var(--table-header-bg);
  color: var(--heading);
  font-weight: 700;
}

.main-content > ul:first-of-type {
  list-style: none;
  max-width: 100%;
  padding: 0.95rem 1.05rem 1rem;
  margin: 1rem 0 1.8rem;
  border: 1px solid var(--border);
  border-radius: 14px;
  background: var(--toc-bg);
}

.main-content > ul:first-of-type::before {
  content: "Contents";
  display: block;
  margin-bottom: 0.65rem;
  font-family: "Sora", "Avenir Next", "Segoe UI", Arial, sans-serif;
  font-size: 0.9rem;
  font-weight: 700;
  letter-spacing: 0.01em;
  text-transform: uppercase;
  color: var(--toc-title);
}

.main-content > ul:first-of-type li {
  margin: 0.22rem 0;
}

.main-content > ul:first-of-type ul {
  margin: 0.25rem 0 0.55rem 0.48rem;
  padding-left: 0.58rem;
  border-left: 1px solid var(--toc-child-border);
}

.main-content > ul:first-of-type a {
  display: block;
  padding: 0.2rem 0.46rem;
  border-radius: 8px;
  font-weight: 700;
  color: var(--toc-link);
  transition: color 170ms ease, background-color 180ms ease, box-shadow 180ms ease, transform 220ms cubic-bezier(0.2, 0.8, 0.2, 1);
}

.main-content > ul:first-of-type a:hover {
  color: var(--toc-link-hover);
  text-decoration: none;
  background: var(--toc-link-hover-bg);
  transform: translateX(2px);
}

.main-content > ul:first-of-type a.is-active {
  color: var(--toc-link-active);
  background: var(--toc-link-active-bg);
  box-shadow: inset 0 0 0 1px var(--toc-link-active-ring);
  transform: translateX(3px);
}

.main-content > ul:first-of-type li ul a {
  font-weight: 600;
  font-size: 0.92rem;
}

.main-content img {
  word-wrap: break-word;
  height: auto;
}

@keyframes hero-fade-in {
  from {
    opacity: 0;
    transform: translateY(12px);
  }
  to {
    opacity: 1;
    transform: translateY(0);
  }
}

@keyframes panel-rise-in {
  from {
    opacity: 0;
    transform: translateY(14px);
  }
  to {
    opacity: 1;
    transform: translateY(0);
  }
}

.page-header > * {
  animation: hero-fade-in 540ms cubic-bezier(0.21, 0.76, 0.26, 1) both;
}

.project-tagline {
  animation-delay: 90ms;
}

.header-actions {
  animation-delay: 170ms;
}

.main-content {
  animation: panel-rise-in 620ms cubic-bezier(0.18, 0.74, 0.24, 1) 130ms both;
}

@media screen and (min-width: 54em) {
  .page-header {
    padding: 4.65rem 2rem 5.05rem;
  }

  .main-content {
    display: grid;
    grid-template-columns: minmax(15rem, 19rem) minmax(0, 1fr);
    column-gap: 2rem;
    row-gap: 0;
    align-items: start;
  }

  .main-content > * {
    grid-column: 2;
    width: min(100%, 70ch);
    max-width: 100%;
  }

  .main-content > ul:first-of-type {
    grid-column: 1;
    grid-row: 1 / span 999;
    width: 100%;
    margin: 0;
    position: sticky;
    top: 1rem;
    max-height: calc(100vh - 2rem);
    overflow: auto;
  }

  .main-content > ul:first-of-type + * {
    margin-top: 0;
  }

  .main-content > table,
  .main-content > pre {
    width: 100%;
    max-width: 100%;
  }
}

@media screen and (max-width: 54em) {
  .page-header {
    padding: 3.1rem 1rem 4.2rem;
  }

  .site-shell {
    margin-top: -1.7rem;
  }

  .main-content > ul:first-of-type {
    max-height: none;
    position: static;
  }

  .header-actions {
    gap: 0.65rem;
  }
}

@media screen and (max-width: 34em) {
  .btn {
    width: 100%;
  }

  .main-content {
    border-radius: 14px;
    padding-top: 1.1rem;
  }
}

@media (prefers-reduced-motion: reduce) {
  * {
    scroll-behavior: auto;
    transition: none !important;
    animation: none !important;
  }
}
</style>
</head>

<body>
  <section class="page-header">
    <h1 class="project-name">Awesome-docker</h1>
    <p class="project-tagline">
      A curated list of Docker resources and projects
    </p>
    <div class="header-actions">
      <a href="https://github.com/veggiemonk/awesome-docker" class="btn"
        >View on GitHub</a
      >
      <a
        class="github-button"
        href="https://github.com/veggiemonk/awesome-docker#readme"
        data-icon="octicon-star"
        data-size="large"
        data-count-href="/veggiemonk/awesome-docker/stargazers"
        data-show-count="true"
        data-count-aria-label="# stargazers on GitHub"
        aria-label="Star veggiemonk/awesome-docker on GitHub"
        >Star</a
      >
    </div>
  </section>
  <main class="site-shell">
    <section id="md" class="main-content"></section>
  </main>
  <script async defer src="https://buttons.github.io/buttons.js"></script>
  <script>
    (function () {
      var toc = document.querySelector("#md > ul:first-of-type");
      if (!toc) {
        return;
      }

      var links = Array.prototype.slice.call(toc.querySelectorAll('a[href^="#"]'));
      if (links.length === 0) {
        return;
      }

      var linkById = {};
      links.forEach(function (link) {
        var href = link.getAttribute("href");
        if (!href || href.length < 2) {
          return;
        }
        var id = href.slice(1);
        try {
          id = decodeURIComponent(id);
        } catch (_) {}
        if (id) {
          linkById[id] = link;
        }
      });

      var headings = Array.prototype.filter.call(
        document.querySelectorAll("#md h1[id], #md h2[id], #md h3[id], #md h4[id], #md h5[id]"),
        function (heading) {
          return Boolean(linkById[heading.id]);
        }
      );
      if (headings.length === 0) {
        return;
      }

      function setActive(id) {
        if (!id || !linkById[id]) {
          return;
        }
        links.forEach(function (link) {
          link.classList.remove("is-active");
          link.removeAttribute("aria-current");
        });
        linkById[id].classList.add("is-active");
        linkById[id].setAttribute("aria-current", "true");
      }

      function setActiveFromHash() {
        if (!window.location.hash || window.location.hash.length < 2) {
          return false;
        }
        var id = window.location.hash.slice(1);
        try {
          id = decodeURIComponent(id);
        } catch (_) {}
        if (linkById[id]) {
          setActive(id);
          return true;
        }
        return false;
      }

      function setActiveFromViewport() {
        var current = headings[0];
        for (var i = 0; i < headings.length; i += 1) {
          if (headings[i].getBoundingClientRect().top <= 150) {
            current = headings[i];
          } else {
            break;
          }
        }
        setActive(current.id);
      }

      var ticking = false;
      window.addEventListener(
        "scroll",
        function () {
          if (ticking) {
            return;
          }
          ticking = true;
          window.requestAnimationFrame(function () {
            setActiveFromViewport();
            ticking = false;
          });
        },
        { passive: true }
      );

      window.addEventListener("hashchange", setActiveFromHash);
      if (!setActiveFromHash()) {
        setActiveFromViewport();
      }
    })();
  </script>
</body>
</html>
go.mod (new file, 34 lines)
@@ -0,0 +1,34 @@
module github.com/veggiemonk/awesome-docker

go 1.26.0

require (
	charm.land/bubbletea/v2 v2.0.2
	charm.land/lipgloss/v2 v2.0.2
	github.com/shurcooL/githubv4 v0.0.0-20260209031235-2402fdf4a9ed
	github.com/spf13/cobra v1.10.2
	github.com/yuin/goldmark v1.7.16
	golang.org/x/oauth2 v0.36.0
	gopkg.in/yaml.v3 v3.0.1
)

require (
	github.com/charmbracelet/colorprofile v0.4.3 // indirect
	github.com/charmbracelet/ultraviolet v0.0.0-20260309091805-903bfd0cf188 // indirect
	github.com/charmbracelet/x/ansi v0.11.6 // indirect
	github.com/charmbracelet/x/term v0.2.2 // indirect
	github.com/charmbracelet/x/termios v0.1.1 // indirect
	github.com/charmbracelet/x/windows v0.2.2 // indirect
	github.com/clipperhouse/displaywidth v0.11.0 // indirect
	github.com/clipperhouse/uax29/v2 v2.7.0 // indirect
	github.com/inconshreveable/mousetrap v1.1.0 // indirect
	github.com/lucasb-eyer/go-colorful v1.3.0 // indirect
	github.com/mattn/go-runewidth v0.0.21 // indirect
	github.com/muesli/cancelreader v0.2.2 // indirect
	github.com/rivo/uniseg v0.4.7 // indirect
	github.com/shurcooL/graphql v0.0.0-20240915155400-7ee5256398cf // indirect
	github.com/spf13/pflag v1.0.10 // indirect
	github.com/xo/terminfo v0.0.0-20220910002029-abceb7e1c41e // indirect
	golang.org/x/sync v0.20.0 // indirect
	golang.org/x/sys v0.42.0 // indirect
)
go.sum (new file, 79 lines)
@@ -0,0 +1,79 @@
charm.land/bubbletea/v2 v2.0.1 h1:B8e9zzK7x9JJ+XvHGF4xnYu9Xa0E0y0MyggY6dbaCfQ=
charm.land/bubbletea/v2 v2.0.1/go.mod h1:3LRff2U4WIYXy7MTxfbAQ+AdfM3D8Xuvz2wbsOD9OHQ=
charm.land/bubbletea/v2 v2.0.2 h1:4CRtRnuZOdFDTWSff9r8QFt/9+z6Emubz3aDMnf/dx0=
charm.land/bubbletea/v2 v2.0.2/go.mod h1:3LRff2U4WIYXy7MTxfbAQ+AdfM3D8Xuvz2wbsOD9OHQ=
charm.land/lipgloss/v2 v2.0.0 h1:sd8N/B3x892oiOjFfBQdXBQp3cAkvjGaU5TvVZC3ivo=
charm.land/lipgloss/v2 v2.0.0/go.mod h1:w6SnmsBFBmEFBodiEDurGS/sdUY/u1+v72DqUzc6J14=
charm.land/lipgloss/v2 v2.0.2 h1:xFolbF8JdpNkM2cEPTfXEcW1p6NRzOWTSamRfYEw8cs=
charm.land/lipgloss/v2 v2.0.2/go.mod h1:KjPle2Qd3YmvP1KL5OMHiHysGcNwq6u83MUjYkFvEkM=
github.com/aymanbagabas/go-udiff v0.4.0 h1:TKnLPh7IbnizJIBKFWa9mKayRUBQ9Kh1BPCk6w2PnYM=
github.com/aymanbagabas/go-udiff v0.4.0/go.mod h1:0L9PGwj20lrtmEMeyw4WKJ/TMyDtvAoK9bf2u/mNo3w=
github.com/aymanbagabas/go-udiff v0.4.1 h1:OEIrQ8maEeDBXQDoGCbbTTXYJMYRCRO1fnodZ12Gv5o=
github.com/charmbracelet/colorprofile v0.4.2 h1:BdSNuMjRbotnxHSfxy+PCSa4xAmz7szw70ktAtWRYrY=
github.com/charmbracelet/colorprofile v0.4.2/go.mod h1:0rTi81QpwDElInthtrQ6Ni7cG0sDtwAd4C4le060fT8=
github.com/charmbracelet/colorprofile v0.4.3 h1:QPa1IWkYI+AOB+fE+mg/5/4HRMZcaXex9t5KX76i20Q=
github.com/charmbracelet/colorprofile v0.4.3/go.mod h1:/zT4BhpD5aGFpqQQqw7a+VtHCzu+zrQtt1zhMt9mR4Q=
github.com/charmbracelet/ultraviolet v0.0.0-20260205113103-524a6607adb8 h1:eyFRbAmexyt43hVfeyBofiGSEmJ7krjLOYt/9CF5NKA=
github.com/charmbracelet/ultraviolet v0.0.0-20260205113103-524a6607adb8/go.mod h1:SQpCTRNBtzJkwku5ye4S3HEuthAlGy2n9VXZnWkEW98=
github.com/charmbracelet/ultraviolet v0.0.0-20260309091805-903bfd0cf188 h1:J8v4kWJYCaxv1SLhLunN74S+jMteZ1f7Dae99ioq4Bo=
github.com/charmbracelet/ultraviolet v0.0.0-20260309091805-903bfd0cf188/go.mod h1:FzWNAbe1jEmI+GZljSnlaSA8wJjnNIZhWBLkTsAl6eg=
github.com/charmbracelet/x/ansi v0.11.6 h1:GhV21SiDz/45W9AnV2R61xZMRri5NlLnl6CVF7ihZW8=
github.com/charmbracelet/x/ansi v0.11.6/go.mod h1:2JNYLgQUsyqaiLovhU2Rv/pb8r6ydXKS3NIttu3VGZQ=
github.com/charmbracelet/x/exp/golden v0.0.0-20250806222409-83e3a29d542f h1:pk6gmGpCE7F3FcjaOEKYriCvpmIN4+6OS/RD0vm4uIA=
github.com/charmbracelet/x/exp/golden v0.0.0-20250806222409-83e3a29d542f/go.mod h1:IfZAMTHB6XkZSeXUqriemErjAWCCzT0LwjKFYCZyw0I=
github.com/charmbracelet/x/term v0.2.2 h1:xVRT/S2ZcKdhhOuSP4t5cLi5o+JxklsoEObBSgfgZRk=
github.com/charmbracelet/x/term v0.2.2/go.mod h1:kF8CY5RddLWrsgVwpw4kAa6TESp6EB5y3uxGLeCqzAI=
github.com/charmbracelet/x/termios v0.1.1 h1:o3Q2bT8eqzGnGPOYheoYS8eEleT5ZVNYNy8JawjaNZY=
github.com/charmbracelet/x/termios v0.1.1/go.mod h1:rB7fnv1TgOPOyyKRJ9o+AsTU/vK5WHJ2ivHeut/Pcwo=
github.com/charmbracelet/x/windows v0.2.2 h1:IofanmuvaxnKHuV04sC0eBy/smG6kIKrWG2/jYn2GuM=
github.com/charmbracelet/x/windows v0.2.2/go.mod h1:/8XtdKZzedat74NQFn0NGlGL4soHB0YQZrETF96h75k=
github.com/clipperhouse/displaywidth v0.11.0 h1:lBc6kY44VFw+TDx4I8opi/EtL9m20WSEFgwIwO+UVM8=
github.com/clipperhouse/displaywidth v0.11.0/go.mod h1:bkrFNkf81G8HyVqmKGxsPufD3JhNl3dSqnGhOoSD/o0=
github.com/clipperhouse/uax29/v2 v2.7.0 h1:+gs4oBZ2gPfVrKPthwbMzWZDaAFPGYK72F0NJv2v7Vk=
github.com/clipperhouse/uax29/v2 v2.7.0/go.mod h1:EFJ2TJMRUaplDxHKj1qAEhCtQPW2tJSwu5BF98AuoVM=
github.com/cpuguy83/go-md2man/v2 v2.0.6/go.mod h1:oOW0eioCTA6cOiMLiUPZOpcVxMig6NIQQ7OS05n1F4g=
github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8=
github.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw=
github.com/lucasb-eyer/go-colorful v1.3.0 h1:2/yBRLdWBZKrf7gB40FoiKfAWYQ0lqNcbuQwVHXptag=
github.com/lucasb-eyer/go-colorful v1.3.0/go.mod h1:R4dSotOR9KMtayYi1e77YzuveK+i7ruzyGqttikkLy0=
github.com/mattn/go-runewidth v0.0.19 h1:v++JhqYnZuu5jSKrk9RbgF5v4CGUjqRfBm05byFGLdw=
github.com/mattn/go-runewidth v0.0.19/go.mod h1:XBkDxAl56ILZc9knddidhrOlY5R/pDhgLpndooCuJAs=
github.com/mattn/go-runewidth v0.0.21 h1:jJKAZiQH+2mIinzCJIaIG9Be1+0NR+5sz/lYEEjdM8w=
github.com/mattn/go-runewidth v0.0.21/go.mod h1:XBkDxAl56ILZc9knddidhrOlY5R/pDhgLpndooCuJAs=
github.com/muesli/cancelreader v0.2.2 h1:3I4Kt4BQjOR54NavqnDogx/MIoWBFa0StPA8ELUXHmA=
github.com/muesli/cancelreader v0.2.2/go.mod h1:3XuTXfFS2VjM+HTLZY9Ak0l6eUKfijIfMUZ4EgX0QYo=
github.com/rivo/uniseg v0.4.7 h1:WUdvkW8uEhrYfLC4ZzdpI2ztxP1I582+49Oc5Mq64VQ=
github.com/rivo/uniseg v0.4.7/go.mod h1:FN3SvrM+Zdj16jyLfmOkMNblXMcoc8DfTHruCPUcx88=
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/shurcooL/githubv4 v0.0.0-20260209031235-2402fdf4a9ed h1:KT7hI8vYXgU0s2qaMkrfq9tCA1w/iEPgfredVP+4Tzw=
github.com/shurcooL/githubv4 v0.0.0-20260209031235-2402fdf4a9ed/go.mod h1:zqMwyHmnN/eDOZOdiTohqIUKUrTFX62PNlu7IJdu0q8=
github.com/shurcooL/graphql v0.0.0-20240915155400-7ee5256398cf h1:o1uxfymjZ7jZ4MsgCErcwWGtVKSiNAXtS59Lhs6uI/g=
github.com/shurcooL/graphql v0.0.0-20240915155400-7ee5256398cf/go.mod h1:9dIRpgIY7hVhoqfe0/FcYp0bpInZaT7dc3BYOprrIUE=
github.com/spf13/cobra v1.10.2 h1:DMTTonx5m65Ic0GOoRY2c16WCbHxOOw6xxezuLaBpcU=
github.com/spf13/cobra v1.10.2/go.mod h1:7C1pvHqHw5A4vrJfjNwvOdzYu0Gml16OCs2GRiTUUS4=
github.com/spf13/pflag v1.0.9/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/spf13/pflag v1.0.10 h1:4EBh2KAYBwaONj6b2Ye1GiHfwjqyROoF4RwYO+vPwFk=
github.com/spf13/pflag v1.0.10/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/xo/terminfo v0.0.0-20220910002029-abceb7e1c41e h1:JVG44RsyaB9T2KIHavMF/ppJZNG9ZpyihvCd0w101no=
github.com/xo/terminfo v0.0.0-20220910002029-abceb7e1c41e/go.mod h1:RbqR21r5mrJuqunuUZ/Dhy/avygyECGrLceyNeo4LiM=
github.com/yuin/goldmark v1.7.16 h1:n+CJdUxaFMiDUNnWC3dMWCIQJSkxH4uz3ZwQBkAlVNE=
github.com/yuin/goldmark v1.7.16/go.mod h1:ip/1k0VRfGynBgxOz0yCqHrbZXhcjxyuS66Brc7iBKg=
go.yaml.in/yaml/v3 v3.0.4/go.mod h1:DhzuOOF2ATzADvBadXxruRBLzYTpT36CKvDb3+aBEFg=
golang.org/x/exp v0.0.0-20231006140011-7918f672742d h1:jtJma62tbqLibJ5sFQz8bKtEM8rJBtfilJ2qTU199MI=
golang.org/x/exp v0.0.0-20231006140011-7918f672742d/go.mod h1:ldy0pHrwJyGW56pPQzzkH36rKxoZW1tw7ZJpeKx+hdo=
golang.org/x/oauth2 v0.35.0 h1:Mv2mzuHuZuY2+bkyWXIHMfhNdJAdwW3FuWeCPYN5GVQ=
golang.org/x/oauth2 v0.35.0/go.mod h1:lzm5WQJQwKZ3nwavOZ3IS5Aulzxi68dUSgRHujetwEA=
golang.org/x/oauth2 v0.36.0 h1:peZ/1z27fi9hUOFCAZaHyrpWG5lwe0RJEEEeH0ThlIs=
golang.org/x/oauth2 v0.36.0/go.mod h1:YDBUJMTkDnJS+A4BP4eZBjCqtokkg1hODuPjwiGPO7Q=
golang.org/x/sync v0.19.0 h1:vV+1eWNmZ5geRlYjzm2adRgW2/mcpevXNg50YZtPCE4=
golang.org/x/sync v0.19.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
golang.org/x/sync v0.20.0 h1:e0PTpb7pjO8GAtTs2dQ6jYa5BWYlMuX047Dco/pItO4=
golang.org/x/sync v0.20.0/go.mod h1:9xrNwdLfx4jkKbNva9FpL6vEN7evnE43NNNJQ2LF3+0=
golang.org/x/sys v0.41.0 h1:Ivj+2Cp/ylzLiEU89QhWblYnOE9zerudt9Ftecq2C6k=
golang.org/x/sys v0.41.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
golang.org/x/sys v0.42.0 h1:omrd2nAlyT5ESRdCLYdm3+fMfNFE/+Rf4bDIQImRJeo=
golang.org/x/sys v0.42.0/go.mod h1:4GL1E5IUh+htKOUEOaiffhrAeqysfVGipDYzABqnCmw=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
@@ -4,10 +4,10 @@
 <meta charset="UTF-8">
 <meta name="viewport" content="width=device-width, initial-scale=1.0">
 <meta http-equiv="X-UA-Compatible" content="ie=edge">
-<meta HTTP-EQUIV="REFRESH" content="0; url=https://awesome-docker.netlify.com">
+<meta http-equiv="refresh" content="0; url=website/">
 <title>Awesome-docker</title>
 </head>
 <body>
-<p> <a href="https://awesome-docker.netlify.com/">We moved to a new place, click here to be redirected.</a></p>
+<p><a href="website/">Redirecting to the generated site.</a></p>
 </body>
 </html>
internal/builder/builder.go (new file, 71 lines)
@@ -0,0 +1,71 @@
package builder

import (
	"bytes"
	"fmt"
	"os"
	"strings"

	"github.com/yuin/goldmark"
	"github.com/yuin/goldmark/extension"
	"github.com/yuin/goldmark/parser"
	"github.com/yuin/goldmark/renderer/html"
)

// Build converts a Markdown file to HTML using a template.
// The template must contain a placeholder element that will be replaced with the rendered content.
func Build(markdownPath, templatePath, outputPath string) error {
	md, err := os.ReadFile(markdownPath)
	if err != nil {
		return fmt.Errorf("read markdown: %w", err)
	}

	tmpl, err := os.ReadFile(templatePath)
	if err != nil {
		return fmt.Errorf("read template: %w", err)
	}

	// Convert markdown to HTML
	gm := goldmark.New(
		goldmark.WithExtensions(extension.GFM),
		goldmark.WithParserOptions(parser.WithAutoHeadingID()),
		goldmark.WithRendererOptions(html.WithUnsafe()),
	)
	var buf bytes.Buffer
	if err := gm.Convert(md, &buf); err != nil {
		return fmt.Errorf("convert markdown: %w", err)
	}

	// Inject into template — support both placeholder formats
	output := string(tmpl)
	replacements := []struct {
		old string
		new string
	}{
		{
			old: `<div id="md"></div>`,
			new: `<div id="md">` + buf.String() + `</div>`,
		},
		{
			old: `<section id="md" class="main-content"></section>`,
			new: `<section id="md" class="main-content">` + buf.String() + `</section>`,
		},
	}

	replaced := false
	for _, r := range replacements {
		if strings.Contains(output, r.old) {
			output = strings.Replace(output, r.old, r.new, 1)
			replaced = true
			break
		}
	}
	if !replaced {
		return fmt.Errorf("template missing supported markdown placeholder")
	}

	if err := os.WriteFile(outputPath, []byte(output), 0o644); err != nil {
		return fmt.Errorf("write output: %w", err)
	}
	return nil
}
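The placeholder-injection step of `Build` can be exercised in isolation with the standard library alone. This is a minimal sketch under stated assumptions: `injectContent` is a hypothetical helper, not part of the package, and its `body` argument stands in for goldmark's rendered output. Like the loop in `Build`, it tries each known placeholder in order and stops at the first match:

```go
package main

import (
	"fmt"
	"strings"
)

// injectContent replaces the first supported empty placeholder element in
// tmpl with the same element wrapped around body. The second return value
// reports whether any placeholder was found.
func injectContent(tmpl, body string) (string, bool) {
	placeholders := []string{
		`<div id="md"></div>`,
		`<section id="md" class="main-content"></section>`,
	}
	for _, p := range placeholders {
		if strings.Contains(tmpl, p) {
			// Split the placeholder into its opening and closing tags,
			// then re-emit it with the rendered body in between.
			opened := strings.TrimSuffix(p, "</div>")
			opened = strings.TrimSuffix(opened, "</section>")
			closing := p[len(opened):]
			return strings.Replace(tmpl, p, opened+body+closing, 1), true
		}
	}
	return tmpl, false
}

func main() {
	tmpl := `<body><section id="md" class="main-content"></section></body>`
	out, ok := injectContent(tmpl, "<h1>Hello</h1>")
	fmt.Println(ok, out)
	// prints: true <body><section id="md" class="main-content"><h1>Hello</h1></section></body>
}
```

First-match-wins mirrors `Build`'s `break` after the first successful replacement, so a template containing both placeholder forms is only filled once.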
internal/builder/builder_test.go (new file, 172 lines)
@@ -0,0 +1,172 @@
package builder

import (
	"os"
	"path/filepath"
	"strings"
	"testing"
)

func TestBuild(t *testing.T) {
	dir := t.TempDir()

	md := "# Test List\n\n- [Example](https://example.com) - A test entry.\n"
	mdPath := filepath.Join(dir, "README.md")
	if err := os.WriteFile(mdPath, []byte(md), 0o644); err != nil {
		t.Fatal(err)
	}

	tmpl := `<!DOCTYPE html>
<html>
<body>
<div id="md"></div>
</body>
</html>`
	tmplPath := filepath.Join(dir, "template.html")
	if err := os.WriteFile(tmplPath, []byte(tmpl), 0o644); err != nil {
		t.Fatal(err)
	}

	outPath := filepath.Join(dir, "index.html")
	if err := Build(mdPath, tmplPath, outPath); err != nil {
		t.Fatalf("Build failed: %v", err)
	}

	content, err := os.ReadFile(outPath)
	if err != nil {
		t.Fatal(err)
	}

	html := string(content)
	if !strings.Contains(html, "Test List") {
		t.Error("expected 'Test List' in output")
	}
	if !strings.Contains(html, "https://example.com") {
		t.Error("expected link in output")
	}
}

func TestBuildWithSectionPlaceholder(t *testing.T) {
	dir := t.TempDir()

	md := "# Hello\n\nWorld.\n"
	mdPath := filepath.Join(dir, "README.md")
	if err := os.WriteFile(mdPath, []byte(md), 0o644); err != nil {
		t.Fatal(err)
	}

	// This matches the actual template format
	tmpl := `<!DOCTYPE html>
<html>
<body>
<section id="md" class="main-content"></section>
</body>
</html>`
	tmplPath := filepath.Join(dir, "template.html")
	if err := os.WriteFile(tmplPath, []byte(tmpl), 0o644); err != nil {
		t.Fatal(err)
	}

	outPath := filepath.Join(dir, "index.html")
	if err := Build(mdPath, tmplPath, outPath); err != nil {
		t.Fatalf("Build failed: %v", err)
	}

	content, err := os.ReadFile(outPath)
	if err != nil {
		t.Fatal(err)
	}

	if !strings.Contains(string(content), "Hello") {
		t.Error("expected 'Hello' in output")
	}
	if !strings.Contains(string(content), `class="main-content"`) {
		t.Error("expected section class preserved")
	}
}

func TestBuildRealREADME(t *testing.T) {
	mdPath := "../../README.md"
	tmplPath := "../../config/website.tmpl.html"
	if _, err := os.Stat(mdPath); err != nil {
		t.Skip("README.md not found")
	}
	if _, err := os.Stat(tmplPath); err != nil {
		t.Skip("website template not found")
	}

	dir := t.TempDir()
	outPath := filepath.Join(dir, "index.html")

	if err := Build(mdPath, tmplPath, outPath); err != nil {
		t.Fatalf("Build failed: %v", err)
	}

	info, err := os.Stat(outPath)
	if err != nil {
		t.Fatal(err)
	}
	if info.Size() < 10000 {
		t.Errorf("output too small: %d bytes", info.Size())
	}
	t.Logf("Generated %d bytes", info.Size())
}

func TestBuildFailsWithoutPlaceholder(t *testing.T) {
	dir := t.TempDir()

	mdPath := filepath.Join(dir, "README.md")
	if err := os.WriteFile(mdPath, []byte("# Title\n"), 0o644); err != nil {
		t.Fatal(err)
	}
||||||
|
|
||||||
|
tmplPath := filepath.Join(dir, "template.html")
|
||||||
|
if err := os.WriteFile(tmplPath, []byte("<html><body><main></main></body></html>"), 0o644); err != nil {
|
||||||
|
t.Fatal(err)
|
||||||
|
}
|
||||||
|
|
||||||
|
outPath := filepath.Join(dir, "index.html")
|
||||||
|
err := Build(mdPath, tmplPath, outPath)
|
||||||
|
if err == nil {
|
||||||
|
t.Fatal("expected Build to fail when template has no supported placeholder")
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestBuildAddsHeadingIDs(t *testing.T) {
|
||||||
|
dir := t.TempDir()
|
||||||
|
|
||||||
|
md := "# Getting Started\n\n## Next Step\n"
|
||||||
|
mdPath := filepath.Join(dir, "README.md")
|
||||||
|
if err := os.WriteFile(mdPath, []byte(md), 0o644); err != nil {
|
||||||
|
t.Fatal(err)
|
||||||
|
}
|
||||||
|
|
||||||
|
tmpl := `<!DOCTYPE html>
|
||||||
|
<html>
|
||||||
|
<body>
|
||||||
|
<section id="md" class="main-content"></section>
|
||||||
|
</body>
|
||||||
|
</html>`
|
||||||
|
tmplPath := filepath.Join(dir, "template.html")
|
||||||
|
if err := os.WriteFile(tmplPath, []byte(tmpl), 0o644); err != nil {
|
||||||
|
t.Fatal(err)
|
||||||
|
}
|
||||||
|
|
||||||
|
outPath := filepath.Join(dir, "index.html")
|
||||||
|
if err := Build(mdPath, tmplPath, outPath); err != nil {
|
||||||
|
t.Fatalf("Build failed: %v", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
content, err := os.ReadFile(outPath)
|
||||||
|
if err != nil {
|
||||||
|
t.Fatal(err)
|
||||||
|
}
|
||||||
|
|
||||||
|
html := string(content)
|
||||||
|
if !strings.Contains(html, `id="getting-started"`) {
|
||||||
|
t.Error("expected auto-generated heading id for h1")
|
||||||
|
}
|
||||||
|
if !strings.Contains(html, `id="next-step"`) {
|
||||||
|
t.Error("expected auto-generated heading id for h2")
|
||||||
|
}
|
||||||
|
}
|
||||||
98
internal/cache/cache.go
vendored
Normal file
@@ -0,0 +1,98 @@
package cache

import (
	"os"
	"strings"
	"time"

	"gopkg.in/yaml.v3"
)

// ExcludeList holds URL prefixes to skip during checking.
type ExcludeList struct {
	Domains []string `yaml:"domains"`
}

// IsExcluded returns true if the URL starts with any excluded prefix.
func (e *ExcludeList) IsExcluded(url string) bool {
	for _, d := range e.Domains {
		if strings.HasPrefix(url, d) {
			return true
		}
	}
	return false
}

// LoadExcludeList reads an exclude.yaml file.
func LoadExcludeList(path string) (*ExcludeList, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, err
	}
	var excl ExcludeList
	if err := yaml.Unmarshal(data, &excl); err != nil {
		return nil, err
	}
	return &excl, nil
}

// HealthEntry stores metadata about a single entry.
type HealthEntry struct {
	URL         string    `yaml:"url"`
	Name        string    `yaml:"name"`
	Status      string    `yaml:"status"` // healthy, inactive, stale, archived, dead
	Stars       int       `yaml:"stars,omitempty"`
	Forks       int       `yaml:"forks,omitempty"`
	LastPush    time.Time `yaml:"last_push,omitempty"`
	HasLicense  bool      `yaml:"has_license,omitempty"`
	HasReadme   bool      `yaml:"has_readme,omitempty"`
	CheckedAt   time.Time `yaml:"checked_at"`
	Category    string    `yaml:"category,omitempty"`
	Description string    `yaml:"description,omitempty"`
}

// HealthCache is the full YAML cache file.
type HealthCache struct {
	Entries []HealthEntry `yaml:"entries"`
}

// LoadHealthCache reads a health_cache.yaml file. Returns an empty cache if the file doesn't exist.
func LoadHealthCache(path string) (*HealthCache, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		if os.IsNotExist(err) {
			return &HealthCache{}, nil
		}
		return nil, err
	}
	var hc HealthCache
	if err := yaml.Unmarshal(data, &hc); err != nil {
		return nil, err
	}
	return &hc, nil
}

// SaveHealthCache writes the cache to a YAML file.
func SaveHealthCache(path string, hc *HealthCache) error {
	data, err := yaml.Marshal(hc)
	if err != nil {
		return err
	}
	return os.WriteFile(path, data, 0o644)
}

// Merge updates the cache with new entries, replacing existing ones by URL.
func (hc *HealthCache) Merge(entries []HealthEntry) {
	index := make(map[string]int)
	for i, e := range hc.Entries {
		index[e.URL] = i
	}
	for _, e := range entries {
		if i, exists := index[e.URL]; exists {
			hc.Entries[i] = e
		} else {
			index[e.URL] = len(hc.Entries)
			hc.Entries = append(hc.Entries, e)
		}
	}
}
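For reference, a file of the shape `LoadExcludeList` parses might look like the sketch below. The domain prefixes and comments are illustrative, not taken from the repository's actual `exclude.yaml`:

```yaml
# exclude.yaml (illustrative example)
domains:
  - https://example-invite.test    # always redirects to a login page
  - https://example-paywall.test   # blocks unauthenticated HEAD/GET checks
```

Because `IsExcluded` does a plain prefix match, listing `https://example-invite.test` also excludes every path under it.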
137
internal/cache/cache_test.go
vendored
Normal file
@@ -0,0 +1,137 @@
package cache

import (
	"os"
	"path/filepath"
	"testing"
	"time"
)

func TestLoadExcludeList(t *testing.T) {
	dir := t.TempDir()
	path := filepath.Join(dir, "exclude.yaml")
	content := `domains:
  - https://example.com
  - https://test.org
`
	if err := os.WriteFile(path, []byte(content), 0o644); err != nil {
		t.Fatal(err)
	}

	excl, err := LoadExcludeList(path)
	if err != nil {
		t.Fatal(err)
	}
	if len(excl.Domains) != 2 {
		t.Errorf("domains count = %d, want 2", len(excl.Domains))
	}
	if !excl.IsExcluded("https://example.com/foo") {
		t.Error("expected https://example.com/foo to be excluded")
	}
	if excl.IsExcluded("https://other.com") {
		t.Error("expected https://other.com to NOT be excluded")
	}
}

func TestHealthCacheRoundTrip(t *testing.T) {
	dir := t.TempDir()
	path := filepath.Join(dir, "health.yaml")

	original := &HealthCache{
		Entries: []HealthEntry{
			{
				URL:        "https://github.com/example/repo",
				Name:       "Example",
				Status:     "healthy",
				Stars:      42,
				LastPush:   time.Date(2026, 1, 15, 0, 0, 0, 0, time.UTC),
				HasLicense: true,
				HasReadme:  true,
				CheckedAt:  time.Date(2026, 2, 27, 9, 0, 0, 0, time.UTC),
			},
		},
	}

	if err := SaveHealthCache(path, original); err != nil {
		t.Fatal(err)
	}

	loaded, err := LoadHealthCache(path)
	if err != nil {
		t.Fatal(err)
	}
	if len(loaded.Entries) != 1 {
		t.Fatalf("entries = %d, want 1", len(loaded.Entries))
	}
	if loaded.Entries[0].Stars != 42 {
		t.Errorf("stars = %d, want 42", loaded.Entries[0].Stars)
	}
}

func TestLoadHealthCacheMissing(t *testing.T) {
	hc, err := LoadHealthCache("/nonexistent/path.yaml")
	if err != nil {
		t.Fatal(err)
	}
	if len(hc.Entries) != 0 {
		t.Errorf("entries = %d, want 0 for missing file", len(hc.Entries))
	}
}

func TestLoadHealthCacheInvalidYAML(t *testing.T) {
	dir := t.TempDir()
	path := filepath.Join(dir, "health.yaml")
	if err := os.WriteFile(path, []byte("entries:\n  - url: [not yaml"), 0o644); err != nil {
		t.Fatal(err)
	}

	hc, err := LoadHealthCache(path)
	if err == nil {
		t.Fatal("expected error for invalid YAML")
	}
	if hc != nil {
		t.Fatal("expected nil cache on invalid YAML")
	}
}

func TestMerge(t *testing.T) {
	hc := &HealthCache{
		Entries: []HealthEntry{
			{URL: "https://github.com/a/a", Name: "A", Stars: 10},
			{URL: "https://github.com/b/b", Name: "B", Stars: 20},
		},
	}

	hc.Merge([]HealthEntry{
		{URL: "https://github.com/b/b", Name: "B", Stars: 25}, // update
		{URL: "https://github.com/c/c", Name: "C", Stars: 30}, // new
	})

	if len(hc.Entries) != 3 {
		t.Fatalf("entries = %d, want 3", len(hc.Entries))
	}
	// B should be updated
	if hc.Entries[1].Stars != 25 {
		t.Errorf("B stars = %d, want 25", hc.Entries[1].Stars)
	}
	// C should be appended
	if hc.Entries[2].Name != "C" {
		t.Errorf("last entry = %q, want C", hc.Entries[2].Name)
	}
}

func TestMergeDeduplicatesIncomingBatch(t *testing.T) {
	hc := &HealthCache{}

	hc.Merge([]HealthEntry{
		{URL: "https://github.com/c/c", Name: "C", Stars: 1},
		{URL: "https://github.com/c/c", Name: "C", Stars: 2},
	})

	if len(hc.Entries) != 1 {
		t.Fatalf("entries = %d, want 1", len(hc.Entries))
	}
	if hc.Entries[0].Stars != 2 {
		t.Fatalf("stars = %d, want last value 2", hc.Entries[0].Stars)
	}
}
174
internal/checker/github.go
Normal file
@@ -0,0 +1,174 @@
package checker

import (
	"context"
	"fmt"
	"net/url"
	"strings"
	"time"

	"github.com/shurcooL/githubv4"
	"golang.org/x/oauth2"
)

// RepoInfo holds metadata about a GitHub repository.
type RepoInfo struct {
	Owner      string
	Name       string
	URL        string
	IsArchived bool
	IsDisabled bool
	IsPrivate  bool
	PushedAt   time.Time
	Stars      int
	Forks      int
	HasLicense bool
}

// ExtractGitHubRepo extracts owner/name from a GitHub URL.
// Returns false for non-repo URLs (issues, wiki, apps, etc.).
func ExtractGitHubRepo(rawURL string) (owner, name string, ok bool) {
	u, err := url.Parse(rawURL)
	if err != nil {
		return "", "", false
	}

	host := strings.ToLower(u.Hostname())
	if host != "github.com" && host != "www.github.com" {
		return "", "", false
	}

	path := strings.Trim(u.Path, "/")
	parts := strings.Split(path, "/")
	if len(parts) != 2 || parts[0] == "" || parts[1] == "" {
		return "", "", false
	}

	// Skip known non-repository top-level routes.
	switch parts[0] {
	case "apps", "features", "topics":
		return "", "", false
	}

	name = strings.TrimSuffix(parts[1], ".git")
	if name == "" {
		return "", "", false
	}

	return parts[0], name, true
}

func isHTTPURL(raw string) bool {
	u, err := url.Parse(raw)
	if err != nil {
		return false
	}
	return u.Scheme == "http" || u.Scheme == "https"
}

func isGitHubAuthError(err error) bool {
	if err == nil {
		return false
	}
	s := strings.ToLower(err.Error())
	return strings.Contains(s, "401 unauthorized") ||
		strings.Contains(s, "bad credentials") ||
		strings.Contains(s, "resource not accessible by integration")
}

// PartitionLinks separates URLs into GitHub repos and external HTTP(S) links.
func PartitionLinks(urls []string) (github, external []string) {
	for _, url := range urls {
		if _, _, ok := ExtractGitHubRepo(url); ok {
			github = append(github, url)
		} else if isHTTPURL(url) {
			external = append(external, url)
		}
	}
	return
}

// GitHubChecker uses the GitHub GraphQL API.
type GitHubChecker struct {
	client *githubv4.Client
}

// NewGitHubChecker creates a checker with the given OAuth token.
func NewGitHubChecker(token string) *GitHubChecker {
	src := oauth2.StaticTokenSource(&oauth2.Token{AccessToken: token})
	httpClient := oauth2.NewClient(context.Background(), src)
	return &GitHubChecker{client: githubv4.NewClient(httpClient)}
}

// CheckRepo queries a single GitHub repository.
func (gc *GitHubChecker) CheckRepo(ctx context.Context, owner, name string) (RepoInfo, error) {
	var query struct {
		Repository struct {
			IsArchived     bool
			IsDisabled     bool
			IsPrivate      bool
			PushedAt       time.Time
			StargazerCount int
			ForkCount      int
			LicenseInfo    *struct {
				Name string
			}
		} `graphql:"repository(owner: $owner, name: $name)"`
	}

	vars := map[string]interface{}{
		"owner": githubv4.String(owner),
		"name":  githubv4.String(name),
	}

	if err := gc.client.Query(ctx, &query, vars); err != nil {
		return RepoInfo{}, fmt.Errorf("github query %s/%s: %w", owner, name, err)
	}

	r := query.Repository
	return RepoInfo{
		Owner:      owner,
		Name:       name,
		URL:        fmt.Sprintf("https://github.com/%s/%s", owner, name),
		IsArchived: r.IsArchived,
		IsDisabled: r.IsDisabled,
		IsPrivate:  r.IsPrivate,
		PushedAt:   r.PushedAt,
		Stars:      r.StargazerCount,
		Forks:      r.ForkCount,
		HasLicense: r.LicenseInfo != nil,
	}, nil
}

// CheckRepos queries multiple repos in sequence with rate limiting.
func (gc *GitHubChecker) CheckRepos(ctx context.Context, urls []string, batchSize int) ([]RepoInfo, []error) {
	if batchSize <= 0 {
		batchSize = 50
	}

	var results []RepoInfo
	var errs []error

	for i, url := range urls {
		owner, name, ok := ExtractGitHubRepo(url)
		if !ok {
			continue
		}

		info, err := gc.CheckRepo(ctx, owner, name)
		if err != nil {
			errs = append(errs, err)
			if isGitHubAuthError(err) {
				break
			}
			continue
		}
		results = append(results, info)

		if (i+1)%batchSize == 0 {
			time.Sleep(1 * time.Second)
		}
	}

	return results, errs
}
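The anonymous struct in `CheckRepo` corresponds roughly to the GraphQL document below, a sketch of what githubv4 derives from the struct and its `graphql` tag (field names are lower-camel-cased; the `$owner`/`$name` variable types follow the GitHub schema):

```graphql
query ($owner: String!, $name: String!) {
  repository(owner: $owner, name: $name) {
    isArchived
    isDisabled
    isPrivate
    pushedAt
    stargazerCount
    forkCount
    licenseInfo {
      name
    }
  }
}
```

Since `LicenseInfo` is a pointer in the Go struct, a repository without a license unmarshals as `nil`, which is what `HasLicense: r.LicenseInfo != nil` relies on.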
78
internal/checker/github_test.go
Normal file
@@ -0,0 +1,78 @@
package checker

import (
	"errors"
	"testing"
)

func TestExtractGitHubRepo(t *testing.T) {
	tests := []struct {
		url   string
		owner string
		name  string
		ok    bool
	}{
		{"https://github.com/docker/compose", "docker", "compose", true},
		{"https://github.com/moby/moby", "moby", "moby", true},
		{"https://github.com/user/repo/", "user", "repo", true},
		{"https://github.com/user/repo?tab=readme-ov-file", "user", "repo", true},
		{"https://github.com/user/repo#readme", "user", "repo", true},
		{"https://github.com/user/repo.git", "user", "repo", true},
		{"https://www.github.com/user/repo", "user", "repo", true},
		{"https://github.com/user/repo/issues", "", "", false},
		{"https://github.com/user/repo/wiki", "", "", false},
		{"https://github.com/apps/dependabot", "", "", false},
		{"https://example.com/not-github", "", "", false},
		{"https://github.com/user", "", "", false},
	}

	for _, tt := range tests {
		owner, name, ok := ExtractGitHubRepo(tt.url)
		if ok != tt.ok {
			t.Errorf("ExtractGitHubRepo(%q): ok = %v, want %v", tt.url, ok, tt.ok)
			continue
		}
		if ok {
			if owner != tt.owner || name != tt.name {
				t.Errorf("ExtractGitHubRepo(%q) = (%q, %q), want (%q, %q)", tt.url, owner, name, tt.owner, tt.name)
			}
		}
	}
}

func TestPartitionLinks(t *testing.T) {
	urls := []string{
		"https://github.com/docker/compose",
		"https://example.com/tool",
		"https://github.com/moby/moby",
		"https://github.com/user/repo/issues",
		"dozzle",
		"#projects",
	}
	gh, ext := PartitionLinks(urls)
	if len(gh) != 2 {
		t.Errorf("github links = %d, want 2", len(gh))
	}
	if len(ext) != 2 {
		t.Errorf("external links = %d, want 2", len(ext))
	}
}

func TestIsGitHubAuthError(t *testing.T) {
	tests := []struct {
		err  error
		want bool
	}{
		{errors.New("non-200 OK status code: 401 Unauthorized body: \"Bad credentials\""), true},
		{errors.New("Resource not accessible by integration"), true},
		{errors.New("dial tcp: lookup api.github.com: no such host"), false},
		{errors.New("context deadline exceeded"), false},
	}

	for _, tt := range tests {
		got := isGitHubAuthError(tt.err)
		if got != tt.want {
			t.Errorf("isGitHubAuthError(%q) = %v, want %v", tt.err, got, tt.want)
		}
	}
}
121
internal/checker/http.go
Normal file
@@ -0,0 +1,121 @@
package checker

import (
	"context"
	"net/http"
	"sync"
	"time"

	"github.com/veggiemonk/awesome-docker/internal/cache"
)

const (
	defaultTimeout     = 30 * time.Second
	defaultConcurrency = 10
	userAgent          = "awesome-docker-checker/1.0"
)

// LinkResult holds the result of checking a single URL.
type LinkResult struct {
	URL         string
	OK          bool
	StatusCode  int
	Redirected  bool
	RedirectURL string
	Error       string
}

func shouldFallbackToGET(statusCode int) bool {
	switch statusCode {
	case http.StatusBadRequest, http.StatusForbidden, http.StatusMethodNotAllowed, http.StatusNotImplemented:
		return true
	default:
		return false
	}
}

// CheckLink checks a single URL. Uses HEAD first, falls back to GET.
func CheckLink(url string, client *http.Client) LinkResult {
	result := LinkResult{URL: url}

	ctx, cancel := context.WithTimeout(context.Background(), defaultTimeout)
	defer cancel()

	// Track redirects
	var finalURL string
	origCheckRedirect := client.CheckRedirect
	client.CheckRedirect = func(req *http.Request, via []*http.Request) error {
		finalURL = req.URL.String()
		if len(via) >= 10 {
			return http.ErrUseLastResponse
		}
		return nil
	}
	defer func() { client.CheckRedirect = origCheckRedirect }()

	doRequest := func(method string) (*http.Response, error) {
		req, err := http.NewRequestWithContext(ctx, method, url, nil)
		if err != nil {
			return nil, err
		}
		req.Header.Set("User-Agent", userAgent)
		return client.Do(req)
	}

	resp, err := doRequest(http.MethodHead)
	if err != nil {
		resp, err = doRequest(http.MethodGet)
		if err != nil {
			result.Error = err.Error()
			return result
		}
	} else if shouldFallbackToGET(resp.StatusCode) {
		resp.Body.Close()
		resp, err = doRequest(http.MethodGet)
		if err != nil {
			result.Error = err.Error()
			return result
		}
	}
	defer resp.Body.Close()

	result.StatusCode = resp.StatusCode
	result.OK = resp.StatusCode >= 200 && resp.StatusCode < 400

	if finalURL != "" && finalURL != url {
		result.Redirected = true
		result.RedirectURL = finalURL
	}

	return result
}

// CheckLinks checks multiple URLs concurrently.
func CheckLinks(urls []string, concurrency int, exclude *cache.ExcludeList) []LinkResult {
	if concurrency <= 0 {
		concurrency = defaultConcurrency
	}

	results := make([]LinkResult, len(urls))
	sem := make(chan struct{}, concurrency)
	var wg sync.WaitGroup

	for i, url := range urls {
		if exclude != nil && exclude.IsExcluded(url) {
			results[i] = LinkResult{URL: url, OK: true}
			continue
		}

		wg.Add(1)
		go func(idx int, u string) {
			defer wg.Done()
			sem <- struct{}{}
			defer func() { <-sem }()
			client := &http.Client{Timeout: defaultTimeout}
			results[idx] = CheckLink(u, client)
		}(i, url)
	}

	wg.Wait()
	return results
}
118
internal/checker/http_test.go
Normal file
@@ -0,0 +1,118 @@
package checker

import (
	"net/http"
	"net/http/httptest"
	"testing"
)

func TestCheckLinkOK(t *testing.T) {
	server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
	}))
	defer server.Close()

	result := CheckLink(server.URL, &http.Client{})
	if !result.OK {
		t.Errorf("expected OK, got status %d, error: %s", result.StatusCode, result.Error)
	}
}

func TestCheckLink404(t *testing.T) {
	server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusNotFound)
	}))
	defer server.Close()

	result := CheckLink(server.URL, &http.Client{})
	if result.OK {
		t.Error("expected not OK for 404")
	}
	if result.StatusCode != 404 {
		t.Errorf("status = %d, want 404", result.StatusCode)
	}
}

func TestCheckLinkRedirect(t *testing.T) {
	final := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
	}))
	defer final.Close()

	redir := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		http.Redirect(w, r, final.URL, http.StatusMovedPermanently)
	}))
	defer redir.Close()

	result := CheckLink(redir.URL, &http.Client{})
	if !result.OK {
		t.Errorf("expected OK after following redirect, error: %s", result.Error)
	}
	if !result.Redirected {
		t.Error("expected Redirected = true")
	}
}

func TestCheckLinks(t *testing.T) {
	server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if r.URL.Path == "/bad" {
			w.WriteHeader(http.StatusNotFound)
			return
		}
		w.WriteHeader(http.StatusOK)
	}))
	defer server.Close()

	urls := []string{server.URL + "/good", server.URL + "/bad", server.URL + "/also-good"}
	results := CheckLinks(urls, 2, nil)
	if len(results) != 3 {
		t.Fatalf("results = %d, want 3", len(results))
	}

	for _, r := range results {
		if r.URL == server.URL+"/bad" && r.OK {
			t.Error("expected /bad to not be OK")
		}
		if r.URL == server.URL+"/good" && !r.OK {
			t.Error("expected /good to be OK")
		}
	}
}

func TestCheckLinkFallbackToGETOnMethodNotAllowed(t *testing.T) {
	server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if r.Method == http.MethodHead {
			w.WriteHeader(http.StatusMethodNotAllowed)
			return
		}
		w.WriteHeader(http.StatusOK)
	}))
	defer server.Close()

	result := CheckLink(server.URL, &http.Client{})
	if !result.OK {
		t.Errorf("expected OK after GET fallback, got status %d, error: %s", result.StatusCode, result.Error)
	}
	if result.StatusCode != http.StatusOK {
		t.Errorf("status = %d, want 200", result.StatusCode)
	}
}

func TestCheckLinkFallbackToGETOnForbiddenHead(t *testing.T) {
	server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if r.Method == http.MethodHead {
			w.WriteHeader(http.StatusForbidden)
			return
		}
		w.WriteHeader(http.StatusOK)
	}))
	defer server.Close()

	result := CheckLink(server.URL, &http.Client{})
	if !result.OK {
		t.Errorf("expected OK after GET fallback, got status %d, error: %s", result.StatusCode, result.Error)
	}
	if result.StatusCode != http.StatusOK {
		t.Errorf("status = %d, want 200", result.StatusCode)
	}
}
147
internal/linter/fixer.go
Normal file
@@ -0,0 +1,147 @@
package linter

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"strings"

	"github.com/veggiemonk/awesome-docker/internal/parser"
)

// attributionRe matches trailing author attributions like:
//
//	by [@author](url), by [@author][ref], by @author
//
// Also handles "Created by", "Maintained by" etc.
var attributionRe = regexp.MustCompile(`\s+(?:(?:[Cc]reated|[Mm]aintained|[Bb]uilt)\s+)?by\s+\[@[^\]]+\](?:\([^)]*\)|\[[^\]]*\])\.?$`)

// bareAttributionRe matches: by @author at end of line (no link).
var bareAttributionRe = regexp.MustCompile(`\s+by\s+@\w+\.?$`)

// sectionHeadingRe matches markdown headings.
var sectionHeadingRe = regexp.MustCompile(`^(#{1,6})\s+(.+?)(?:\s*<!--.*-->)?$`)

// RemoveAttribution strips author attribution from a description string.
func RemoveAttribution(desc string) string {
	desc = attributionRe.ReplaceAllString(desc, "")
	desc = bareAttributionRe.ReplaceAllString(desc, "")
	return strings.TrimSpace(desc)
}
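To illustrate what these two patterns do and do not strip, here is a standalone sketch that inlines the same regexes; the author handles and descriptions are made up for the example:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// Same patterns as in fixer.go, inlined so this sketch runs on its own.
var attributionRe = regexp.MustCompile(`\s+(?:(?:[Cc]reated|[Mm]aintained|[Bb]uilt)\s+)?by\s+\[@[^\]]+\](?:\([^)]*\)|\[[^\]]*\])\.?$`)
var bareAttributionRe = regexp.MustCompile(`\s+by\s+@\w+\.?$`)

func removeAttribution(desc string) string {
	desc = attributionRe.ReplaceAllString(desc, "")
	desc = bareAttributionRe.ReplaceAllString(desc, "")
	return strings.TrimSpace(desc)
}

func main() {
	for _, s := range []string{
		"Live log viewer by [@jane](https://github.com/jane).", // inline-link attribution
		"CLI tool. Created by [@jane][jane-ref]",               // reference-style link
		"Simple dashboard by @bob.",                            // bare mention
		"Reads stdin line by line.",                            // "by" mid-sentence: untouched
	} {
		fmt.Printf("%q -> %q\n", s, removeAttribution(s))
	}
}
```

Note that both patterns are anchored with `$`, so an attribution in the middle of a description is deliberately left alone, as the last example shows.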
|
|
||||||
|
// FormatEntry reconstructs a markdown list line from a parsed Entry.
|
||||||
|
func FormatEntry(e parser.Entry) string {
|
||||||
|
desc := e.Description
|
||||||
|
var markers []string
|
||||||
|
for _, m := range e.Markers {
|
||||||
|
switch m {
|
||||||
|
case parser.MarkerAbandoned:
|
||||||
|
markers = append(markers, ":skull:")
|
||||||
|
case parser.MarkerPaid:
|
||||||
|
markers = append(markers, ":yen:")
|
||||||
|
case parser.MarkerWIP:
|
||||||
|
markers = append(markers, ":construction:")
|
||||||
|
case parser.MarkerStale:
|
||||||
|
markers = append(markers, ":ice_cube:")
|
||||||
|
}
|
||||||
|
}
|
||||||
|
if len(markers) > 0 {
|
||||||
|
desc = strings.Join(markers, " ") + " " + desc
|
||||||
|
}
|
||||||
|
return fmt.Sprintf("- [%s](%s) - %s", e.Name, e.URL, desc)
|
||||||
|
}
|
||||||
|
|
||||||
|
// FixFile reads the README, fixes entries (capitalize, period, remove attribution,
// sort), and writes the result back.
func FixFile(path string) (int, error) {
	f, err := os.Open(path)
	if err != nil {
		return 0, err
	}
	defer f.Close()

	var lines []string
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		lines = append(lines, scanner.Text())
	}
	if err := scanner.Err(); err != nil {
		return 0, err
	}

	fixCount := 0

	var headingLines []int
	for i, line := range lines {
		if sectionHeadingRe.MatchString(line) {
			headingLines = append(headingLines, i)
		}
	}

	// Process each heading block independently to match the linter's sort scope.
	for i, headingIdx := range headingLines {
		start := headingIdx + 1
		end := len(lines)
		if i+1 < len(headingLines) {
			end = headingLines[i+1]
		}

		var entryPositions []int
		var entries []parser.Entry
		for lineIdx := start; lineIdx < end; lineIdx++ {
			entry, err := parser.ParseEntry(lines[lineIdx], lineIdx+1)
			if err != nil {
				continue
			}
			entryPositions = append(entryPositions, lineIdx)
			entries = append(entries, entry)
		}
		if len(entries) == 0 {
			continue
		}

		var fixed []parser.Entry
		for _, e := range entries {
			f := FixEntry(e)
			f.Description = RemoveAttribution(f.Description)
			// Re-apply the period after removing attribution (it may have been stripped).
			if len(f.Description) > 0 && !strings.HasSuffix(f.Description, ".") {
				f.Description += "."
			}
			fixed = append(fixed, f)
		}

		sorted := SortEntries(fixed)
		for j, e := range sorted {
			newLine := FormatEntry(e)
			lineIdx := entryPositions[j]
			if lines[lineIdx] != newLine {
				fixCount++
				lines[lineIdx] = newLine
			}
		}
	}

	if fixCount == 0 {
		return 0, nil
	}

	// Write back.
	out, err := os.Create(path)
	if err != nil {
		return 0, err
	}
	defer out.Close()

	w := bufio.NewWriter(out)
	for i, line := range lines {
		w.WriteString(line)
		if i < len(lines)-1 {
			w.WriteString("\n")
		}
	}
	// Always terminate the file with a trailing newline.
	w.WriteString("\n")
	return fixCount, w.Flush()
}
193	internal/linter/fixer_test.go	Normal file
@@ -0,0 +1,193 @@
package linter

import (
	"os"
	"strings"
	"testing"

	"github.com/veggiemonk/awesome-docker/internal/parser"
)

func TestRemoveAttribution(t *testing.T) {
	tests := []struct {
		input string
		want  string
	}{
		{
			"Tool for managing containers by [@author](https://github.com/author)",
			"Tool for managing containers",
		},
		{
			"Tool for managing containers by [@author][author]",
			"Tool for managing containers",
		},
		{
			"Tool for managing containers by @author",
			"Tool for managing containers",
		},
		{
			"Analyzes resource usage. Created by [@Google][google]",
			"Analyzes resource usage.",
		},
		{
			"A tool by [@someone](https://example.com).",
			"A tool",
		},
		{
			"step-by-step tutorial and more resources",
			"step-by-step tutorial and more resources",
		},
		{
			"No attribution here",
			"No attribution here",
		},
	}

	for _, tt := range tests {
		got := RemoveAttribution(tt.input)
		if got != tt.want {
			t.Errorf("RemoveAttribution(%q) = %q, want %q", tt.input, got, tt.want)
		}
	}
}

func TestFormatEntry(t *testing.T) {
	e := parser.Entry{
		Name:        "Portainer",
		URL:         "https://github.com/portainer/portainer",
		Description: "Management UI for Docker.",
	}
	got := FormatEntry(e)
	want := "- [Portainer](https://github.com/portainer/portainer) - Management UI for Docker."
	if got != want {
		t.Errorf("FormatEntry = %q, want %q", got, want)
	}
}

func TestFormatEntryWithMarkers(t *testing.T) {
	e := parser.Entry{
		Name:        "OldTool",
		URL:         "https://github.com/old/tool",
		Description: "A deprecated tool.",
		Markers:     []parser.Marker{parser.MarkerAbandoned},
	}
	got := FormatEntry(e)
	want := "- [OldTool](https://github.com/old/tool) - :skull: A deprecated tool."
	if got != want {
		t.Errorf("FormatEntry = %q, want %q", got, want)
	}
}

func TestFixFile(t *testing.T) {
	content := `# Awesome Docker

## Tools

- [Zebra](https://example.com/zebra) - a tool by [@author](https://github.com/author)
- [Alpha](https://example.com/alpha) - another tool

## Other

Some text here.
`
	tmp, err := os.CreateTemp("", "readme-*.md")
	if err != nil {
		t.Fatal(err)
	}
	defer os.Remove(tmp.Name())

	if _, err := tmp.WriteString(content); err != nil {
		t.Fatal(err)
	}
	tmp.Close()

	count, err := FixFile(tmp.Name())
	if err != nil {
		t.Fatal(err)
	}

	if count == 0 {
		t.Fatal("expected fixes, got 0")
	}

	data, err := os.ReadFile(tmp.Name())
	if err != nil {
		t.Fatal(err)
	}
	result := string(data)

	// Check sorting: Alpha should come before Zebra.
	alphaIdx := strings.Index(result, "[Alpha]")
	zebraIdx := strings.Index(result, "[Zebra]")
	if alphaIdx > zebraIdx {
		t.Error("expected Alpha before Zebra after sort")
	}

	// Check capitalization.
	if !strings.Contains(result, "- A tool.") {
		t.Errorf("expected capitalized description, got:\n%s", result)
	}

	// Check attribution removed.
	if strings.Contains(result, "@author") {
		t.Errorf("expected attribution removed, got:\n%s", result)
	}

	// Check period added.
	if !strings.Contains(result, "Another tool.") {
		t.Errorf("expected period added, got:\n%s", result)
	}
}

func TestFixFileSortsAcrossBlankLinesAndIsIdempotent(t *testing.T) {
	content := `# Awesome Docker

## Tools

- [Zulu](https://example.com/zulu) - z tool

- [Alpha](https://example.com/alpha) - a tool
`

	tmp, err := os.CreateTemp("", "readme-*.md")
	if err != nil {
		t.Fatal(err)
	}
	defer os.Remove(tmp.Name())

	if _, err := tmp.WriteString(content); err != nil {
		t.Fatal(err)
	}
	tmp.Close()

	firstCount, err := FixFile(tmp.Name())
	if err != nil {
		t.Fatal(err)
	}
	if firstCount == 0 {
		t.Fatal("expected first run to apply fixes")
	}

	firstData, err := os.ReadFile(tmp.Name())
	if err != nil {
		t.Fatal(err)
	}
	firstResult := string(firstData)

	alphaIdx := strings.Index(firstResult, "[Alpha]")
	zuluIdx := strings.Index(firstResult, "[Zulu]")
	if alphaIdx == -1 || zuluIdx == -1 {
		t.Fatalf("expected both Alpha and Zulu in result:\n%s", firstResult)
	}
	if alphaIdx > zuluIdx {
		t.Fatalf("expected Alpha before Zulu after fix:\n%s", firstResult)
	}

	secondCount, err := FixFile(tmp.Name())
	if err != nil {
		t.Fatal(err)
	}
	if secondCount != 0 {
		t.Fatalf("expected second run to be idempotent, got %d changes", secondCount)
	}
}
60	internal/linter/linter.go	Normal file
@@ -0,0 +1,60 @@
package linter

import (
	"github.com/veggiemonk/awesome-docker/internal/parser"
)

// Result holds all lint issues found.
type Result struct {
	Issues   []Issue
	Errors   int
	Warnings int
}

// Lint checks an entire parsed document for issues.
func Lint(doc parser.Document) Result {
	var result Result

	// Collect all entries for duplicate checking.
	allEntries := collectEntries(doc.Sections)
	for _, issue := range CheckDuplicates(allEntries) {
		addIssue(&result, issue)
	}

	// Check each section.
	lintSections(doc.Sections, &result)

	return result
}

func lintSections(sections []parser.Section, result *Result) {
	for _, s := range sections {
		for _, e := range s.Entries {
			for _, issue := range CheckEntry(e) {
				addIssue(result, issue)
			}
		}
		for _, issue := range CheckSorted(s.Entries) {
			addIssue(result, issue)
		}
		lintSections(s.Children, result)
	}
}

func collectEntries(sections []parser.Section) []parser.Entry {
	var all []parser.Entry
	for _, s := range sections {
		all = append(all, s.Entries...)
		all = append(all, collectEntries(s.Children)...)
	}
	return all
}

func addIssue(result *Result, issue Issue) {
	result.Issues = append(result.Issues, issue)
	if issue.Severity == SeverityError {
		result.Errors++
	} else {
		result.Warnings++
	}
}
111	internal/linter/linter_test.go	Normal file
@@ -0,0 +1,111 @@
package linter

import (
	"testing"

	"github.com/veggiemonk/awesome-docker/internal/parser"
)

func TestRuleDescriptionCapital(t *testing.T) {
	entry := parser.Entry{Name: "Test", URL: "https://example.com", Description: "lowercase start.", Line: 10}
	issues := CheckEntry(entry)
	found := false
	for _, issue := range issues {
		if issue.Rule == RuleDescriptionCapital {
			found = true
		}
	}
	if !found {
		t.Error("expected RuleDescriptionCapital issue for lowercase description")
	}
}

func TestRuleDescriptionPeriod(t *testing.T) {
	entry := parser.Entry{Name: "Test", URL: "https://example.com", Description: "No period at end", Line: 10}
	issues := CheckEntry(entry)
	found := false
	for _, issue := range issues {
		if issue.Rule == RuleDescriptionPeriod {
			found = true
		}
	}
	if !found {
		t.Error("expected RuleDescriptionPeriod issue")
	}
}

func TestRuleSorted(t *testing.T) {
	entries := []parser.Entry{
		{Name: "Zebra", URL: "https://z.com", Description: "Z.", Line: 1},
		{Name: "Alpha", URL: "https://a.com", Description: "A.", Line: 2},
	}
	issues := CheckSorted(entries)
	if len(issues) == 0 {
		t.Error("expected sorting issue")
	}
}

func TestRuleSortedOK(t *testing.T) {
	entries := []parser.Entry{
		{Name: "Alpha", URL: "https://a.com", Description: "A.", Line: 1},
		{Name: "Zebra", URL: "https://z.com", Description: "Z.", Line: 2},
	}
	issues := CheckSorted(entries)
	if len(issues) != 0 {
		t.Errorf("expected no sorting issues, got %d", len(issues))
	}
}

func TestRuleDuplicateURL(t *testing.T) {
	entries := []parser.Entry{
		{Name: "A", URL: "https://example.com/a", Description: "A.", Line: 1},
		{Name: "B", URL: "https://example.com/a", Description: "B.", Line: 5},
	}
	issues := CheckDuplicates(entries)
	if len(issues) == 0 {
		t.Error("expected duplicate URL issue")
	}
}

func TestValidEntry(t *testing.T) {
	entry := parser.Entry{Name: "Good", URL: "https://example.com", Description: "A good project.", Line: 10}
	issues := CheckEntry(entry)
	if len(issues) != 0 {
		t.Errorf("expected no issues, got %v", issues)
	}
}

func TestFixDescriptionCapital(t *testing.T) {
	entry := parser.Entry{Name: "Test", URL: "https://example.com", Description: "lowercase.", Line: 10}
	fixed := FixEntry(entry)
	if fixed.Description != "Lowercase." {
		t.Errorf("description = %q, want %q", fixed.Description, "Lowercase.")
	}
}

func TestFixDescriptionPeriod(t *testing.T) {
	entry := parser.Entry{Name: "Test", URL: "https://example.com", Description: "No period", Line: 10}
	fixed := FixEntry(entry)
	if fixed.Description != "No period." {
		t.Errorf("description = %q, want %q", fixed.Description, "No period.")
	}
}

func TestLintDocument(t *testing.T) {
	doc := parser.Document{
		Sections: []parser.Section{
			{
				Title: "Tools",
				Level: 2,
				Entries: []parser.Entry{
					{Name: "Zebra", URL: "https://z.com", Description: "Z tool.", Line: 1},
					{Name: "Alpha", URL: "https://a.com", Description: "a tool", Line: 2},
				},
			},
		},
	}
	result := Lint(doc)
	if result.Errors == 0 {
		t.Error("expected errors (unsorted, lowercase, no period)")
	}
}
149	internal/linter/rules.go	Normal file
@@ -0,0 +1,149 @@
package linter

import (
	"fmt"
	"sort"
	"strings"
	"unicode"

	"github.com/veggiemonk/awesome-docker/internal/parser"
)

// Rule identifies a linting rule.
type Rule string

const (
	RuleDescriptionCapital Rule = "description-capital"
	RuleDescriptionPeriod  Rule = "description-period"
	RuleSorted             Rule = "sorted"
	RuleDuplicateURL       Rule = "duplicate-url"
)

// Severity of a lint issue.
type Severity int

const (
	SeverityError Severity = iota
	SeverityWarning
)

// Issue is a single lint problem found.
type Issue struct {
	Rule     Rule
	Severity Severity
	Line     int
	Message  string
}

func (i Issue) String() string {
	sev := "ERROR"
	if i.Severity == SeverityWarning {
		sev = "WARN"
	}
	return fmt.Sprintf("[%s] line %d: %s (%s)", sev, i.Line, i.Message, i.Rule)
}

// CheckEntry validates a single entry against formatting rules.
func CheckEntry(e parser.Entry) []Issue {
	var issues []Issue

	if first, ok := firstLetter(e.Description); ok && !unicode.IsUpper(first) {
		issues = append(issues, Issue{
			Rule:     RuleDescriptionCapital,
			Severity: SeverityError,
			Line:     e.Line,
			Message:  fmt.Sprintf("%q: description should start with a capital letter", e.Name),
		})
	}

	if len(e.Description) > 0 && !strings.HasSuffix(e.Description, ".") {
		issues = append(issues, Issue{
			Rule:     RuleDescriptionPeriod,
			Severity: SeverityError,
			Line:     e.Line,
			Message:  fmt.Sprintf("%q: description should end with a period", e.Name),
		})
	}

	return issues
}

// CheckSorted verifies entries are in alphabetical order (case-insensitive).
func CheckSorted(entries []parser.Entry) []Issue {
	var issues []Issue
	for i := 1; i < len(entries); i++ {
		prev := strings.ToLower(entries[i-1].Name)
		curr := strings.ToLower(entries[i].Name)
		if prev > curr {
			issues = append(issues, Issue{
				Rule:     RuleSorted,
				Severity: SeverityError,
				Line:     entries[i].Line,
				Message:  fmt.Sprintf("%q should come before %q (alphabetical order)", entries[i].Name, entries[i-1].Name),
			})
		}
	}
	return issues
}
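CheckSorted compares only adjacent pairs, so a single out-of-place entry is reported once against its predecessor rather than against every later entry. The same adjacent-pair logic can be sketched standalone (the function name and sample names here are illustrative, not from the package):

```go
package main

import (
	"fmt"
	"strings"
)

// outOfOrder mirrors CheckSorted's adjacent-pair, case-insensitive
// comparison, returning the names that break alphabetical order.
func outOfOrder(names []string) []string {
	var bad []string
	for i := 1; i < len(names); i++ {
		if strings.ToLower(names[i-1]) > strings.ToLower(names[i]) {
			bad = append(bad, names[i])
		}
	}
	return bad
}

func main() {
	fmt.Println(outOfOrder([]string{"alpha", "Beta", "ZEBRA"})) // []
	fmt.Println(outOfOrder([]string{"Zebra", "Alpha"}))         // [Alpha]
}
```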
// CheckDuplicates finds entries with the same URL across the entire document.
func CheckDuplicates(entries []parser.Entry) []Issue {
	var issues []Issue
	seen := make(map[string]int) // URL -> first line number
	for _, e := range entries {
		url := strings.TrimRight(e.URL, "/")
		if firstLine, exists := seen[url]; exists {
			issues = append(issues, Issue{
				Rule:     RuleDuplicateURL,
				Severity: SeverityError,
				Line:     e.Line,
				Message:  fmt.Sprintf("duplicate URL %q (first seen at line %d)", e.URL, firstLine),
			})
		} else {
			seen[url] = e.Line
		}
	}
	return issues
}
// firstLetter returns the first unicode letter in s and true, or zero and false if none.
func firstLetter(s string) (rune, bool) {
	for _, r := range s {
		if unicode.IsLetter(r) {
			return r, true
		}
	}
	return 0, false
}

// FixEntry returns a copy of the entry with auto-fixable issues corrected.
func FixEntry(e parser.Entry) parser.Entry {
	fixed := e
	if len(fixed.Description) > 0 {
		// Capitalize the first letter (find it; it may not be at index 0).
		runes := []rune(fixed.Description)
		for i, r := range runes {
			if unicode.IsLetter(r) {
				runes[i] = unicode.ToUpper(r)
				break
			}
		}
		fixed.Description = string(runes)

		// Ensure a period at the end.
		if !strings.HasSuffix(fixed.Description, ".") {
			fixed.Description += "."
		}
	}
	return fixed
}

// SortEntries returns a sorted copy of entries (case-insensitive by Name).
func SortEntries(entries []parser.Entry) []parser.Entry {
	sorted := make([]parser.Entry, len(entries))
	copy(sorted, entries)
	sort.Slice(sorted, func(i, j int) bool {
		return strings.ToLower(sorted[i].Name) < strings.ToLower(sorted[j].Name)
	})
	return sorted
}
140	internal/parser/parser.go	Normal file
@@ -0,0 +1,140 @@
package parser

import (
	"bufio"
	"fmt"
	"io"
	"regexp"
	"strings"
)

// entryRe matches: - [Name](URL) - Description
// It also handles optional markers/text between the URL and the " - " separator, e.g.:
//
//	- [Name](URL) :skull: - Description
//	- [Name](URL) (2) :skull: - Description
var entryRe = regexp.MustCompile(`^[-*]\s+\[([^\]]+)\]\(([^)]+)\)(.*?)\s+-\s+(.+)$`)
// headingRe matches markdown headings: # Title, ## Title, etc.
var headingRe = regexp.MustCompile(`^(#{1,6})\s+(.+?)(?:\s*<!--.*-->)?$`)

var markerDefs = []struct {
	text   string
	marker Marker
}{
	{text: ":skull:", marker: MarkerAbandoned},
	{text: ":yen:", marker: MarkerPaid},
	{text: ":construction:", marker: MarkerWIP},
	{text: ":ice_cube:", marker: MarkerStale},
}

// ParseEntry parses a single markdown list line into an Entry.
func ParseEntry(line string, lineNum int) (Entry, error) {
	m := entryRe.FindStringSubmatch(strings.TrimSpace(line))
	if m == nil {
		return Entry{}, fmt.Errorf("line %d: not a valid entry: %q", lineNum, line)
	}

	middle := m[3] // text between the URL's closing paren and " - "
	desc := m[4]
	var markers []Marker

	// Extract markers from both the middle section and the description.
	for _, def := range markerDefs {
		if strings.Contains(middle, def.text) || strings.Contains(desc, def.text) {
			markers = append(markers, def.marker)
			middle = strings.ReplaceAll(middle, def.text, "")
			desc = strings.ReplaceAll(desc, def.text, "")
		}
	}
	desc = strings.TrimSpace(desc)

	return Entry{
		Name:        m[1],
		URL:         m[2],
		Description: desc,
		Markers:     markers,
		Line:        lineNum,
		Raw:         line,
	}, nil
}

// Parse reads a full README and returns a Document.
func Parse(r io.Reader) (Document, error) {
	scanner := bufio.NewScanner(r)
	var doc Document
	var allSections []struct {
		section Section
		level   int
	}

	lineNum := 0
	for scanner.Scan() {
		lineNum++
		line := scanner.Text()

		// Check for a heading.
		if hm := headingRe.FindStringSubmatch(line); hm != nil {
			level := len(hm[1])
			title := strings.TrimSpace(hm[2])
			allSections = append(allSections, struct {
				section Section
				level   int
			}{
				section: Section{Title: title, Level: level, Line: lineNum},
				level:   level,
			})
			continue
		}

		// Check for an entry (list item with a link).
		if entry, err := ParseEntry(line, lineNum); err == nil {
			if len(allSections) > 0 {
				allSections[len(allSections)-1].section.Entries = append(
					allSections[len(allSections)-1].section.Entries, entry)
			}
			continue
		}

		// Everything else is preamble if no sections have been seen yet.
		if len(allSections) == 0 {
			doc.Preamble = append(doc.Preamble, line)
		}
	}

	if err := scanner.Err(); err != nil {
		return doc, err
	}

	// Build the section tree by nesting based on heading level.
	doc.Sections = buildTree(allSections)
	return doc, nil
}
func buildTree(flat []struct {
	section Section
	level   int
},
) []Section {
	if len(flat) == 0 {
		return nil
	}

	var result []Section
	for i := 0; i < len(flat); i++ {
		current := flat[i].section
		currentLevel := flat[i].level

		// Collect children: everything after this heading at a deeper level.
		j := i + 1
		for j < len(flat) && flat[j].level > currentLevel {
			j++
		}
		if j > i+1 {
			current.Children = buildTree(flat[i+1 : j])
		}
		result = append(result, current)
		i = j - 1
	}
	return result
}
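The nesting rule in buildTree can be isolated: each heading owns the contiguous run of strictly deeper headings that follows it, and that run becomes its (recursively built) children. This standalone sketch computes just that span over plain heading levels (function name and sample levels are illustrative):

```go
package main

import "fmt"

// childSpan returns the half-open index range of headings owned by the
// heading at i: everything after it with a strictly deeper level, the
// same scan buildTree performs before recursing.
func childSpan(levels []int, i int) (start, end int) {
	j := i + 1
	for j < len(levels) && levels[j] > levels[i] {
		j++
	}
	return i + 1, j
}

func main() {
	levels := []int{1, 2, 3, 2, 1} // i.e. # ## ### ## #
	s, e := childSpan(levels, 0)
	fmt.Println(s, e) // 1 4: the first # owns indices 1..3
}
```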
161	internal/parser/parser_test.go	Normal file
@@ -0,0 +1,161 @@
package parser

import (
	"os"
	"strings"
	"testing"
)

func TestParseEntry(t *testing.T) {
	line := `- [Docker Desktop](https://www.docker.com/products/docker-desktop/) - Official native app. Only for Windows and MacOS.`
	entry, err := ParseEntry(line, 1)
	if err != nil {
		t.Fatalf("unexpected error: %v", err)
	}
	if entry.Name != "Docker Desktop" {
		t.Errorf("name = %q, want %q", entry.Name, "Docker Desktop")
	}
	if entry.URL != "https://www.docker.com/products/docker-desktop/" {
		t.Errorf("url = %q, want %q", entry.URL, "https://www.docker.com/products/docker-desktop/")
	}
	if entry.Description != "Official native app. Only for Windows and MacOS." {
		t.Errorf("description = %q, want %q", entry.Description, "Official native app. Only for Windows and MacOS.")
	}
	if len(entry.Markers) != 0 {
		t.Errorf("markers = %v, want empty", entry.Markers)
	}
}

func TestParseEntryWithMarkers(t *testing.T) {
	line := `- [Docker Swarm](https://github.com/docker/swarm) - Swarm clustering system. :skull:`
	entry, err := ParseEntry(line, 1)
	if err != nil {
		t.Fatalf("unexpected error: %v", err)
	}
	if entry.Name != "Docker Swarm" {
		t.Errorf("name = %q, want %q", entry.Name, "Docker Swarm")
	}
	if len(entry.Markers) != 1 || entry.Markers[0] != MarkerAbandoned {
		t.Errorf("markers = %v, want [MarkerAbandoned]", entry.Markers)
	}
	if strings.Contains(entry.Description, ":skull:") {
		t.Errorf("description should not contain marker text, got %q", entry.Description)
	}
}

func TestParseEntryMultipleMarkers(t *testing.T) {
	line := `- [SomeProject](https://example.com) - A project. :yen: :construction:`
	entry, err := ParseEntry(line, 1)
	if err != nil {
		t.Fatalf("unexpected error: %v", err)
	}
	if len(entry.Markers) != 2 {
		t.Fatalf("markers count = %d, want 2", len(entry.Markers))
	}
}

func TestParseEntryMarkersCanonicalOrder(t *testing.T) {
	line := `- [SomeProject](https://example.com) - :construction: A project. :skull:`
	entry, err := ParseEntry(line, 1)
	if err != nil {
		t.Fatalf("unexpected error: %v", err)
	}
	if len(entry.Markers) != 2 {
		t.Fatalf("markers count = %d, want 2", len(entry.Markers))
	}
	if entry.Markers[0] != MarkerAbandoned || entry.Markers[1] != MarkerWIP {
		t.Fatalf("marker order = %v, want [MarkerAbandoned MarkerWIP]", entry.Markers)
	}
}

func TestParseDocument(t *testing.T) {
	input := `# Awesome Docker

> A curated list

# Contents

- [Projects](#projects)

# Legend

- Abandoned :skull:

# Projects

## Tools

- [ToolA](https://github.com/a/a) - Does A.
- [ToolB](https://github.com/b/b) - Does B. :skull:

## Services

- [ServiceC](https://example.com/c) - Does C. :yen:
`
	doc, err := Parse(strings.NewReader(input))
	if err != nil {
		t.Fatalf("unexpected error: %v", err)
	}
	if len(doc.Sections) == 0 {
		t.Fatal("expected at least one section")
	}
	// Find the "Projects" section.
	var projects *Section
	for i := range doc.Sections {
		if doc.Sections[i].Title == "Projects" {
			projects = &doc.Sections[i]
			break
		}
	}
	if projects == nil {
		t.Fatal("expected a Projects section")
	}
	if len(projects.Children) != 2 {
		t.Errorf("projects children = %d, want 2", len(projects.Children))
	}
	if projects.Children[0].Title != "Tools" {
		t.Errorf("first child = %q, want %q", projects.Children[0].Title, "Tools")
	}
	if len(projects.Children[0].Entries) != 2 {
		t.Errorf("Tools entries = %d, want 2", len(projects.Children[0].Entries))
	}
}

func TestParseNotAnEntry(t *testing.T) {
	_, err := ParseEntry("- Abandoned :skull:", 1)
	if err == nil {
		t.Error("expected error for non-entry list item")
	}
}

func TestParseRealREADME(t *testing.T) {
	f, err := os.Open("../../README.md")
	if err != nil {
		t.Skip("README.md not found, skipping integration test")
	}
	defer f.Close()

	doc, err := Parse(f)
	if err != nil {
		t.Fatalf("failed to parse README: %v", err)
	}

	if len(doc.Sections) == 0 {
		t.Error("expected sections")
	}

	total := countEntries(doc.Sections)
	if total < 100 {
		t.Errorf("expected at least 100 entries, got %d", total)
	}
	t.Logf("Parsed %d sections, %d total entries", len(doc.Sections), total)
}

func countEntries(sections []Section) int {
	n := 0
	for _, s := range sections {
		n += len(s.Entries)
		n += countEntries(s.Children)
	}
	return n
}
36	internal/parser/types.go	Normal file
@@ -0,0 +1,36 @@
package parser

// Marker represents a status emoji on an entry.
type Marker int

const (
	MarkerAbandoned Marker = iota // :skull:
	MarkerPaid                    // :yen:
	MarkerWIP                     // :construction:
	MarkerStale                   // :ice_cube:
)

// Entry is a single link entry in the README.
type Entry struct {
	Name        string
	URL         string
	Description string
	Markers     []Marker
	Line        int    // 1-based line number in source
	Raw         string // original line text
}

// Section is a heading with optional entries and child sections.
type Section struct {
	Title    string
	Level    int // heading level: 1 = #, 2 = ##, etc.
	Entries  []Entry
	Children []Section
	Line     int
}

// Document is the parsed representation of the full README.
type Document struct {
	Preamble []string // lines before the first section
	Sections []Section
}
178
internal/scorer/scorer.go
Normal file
@@ -0,0 +1,178 @@
package scorer

import (
	"encoding/json"
	"fmt"
	"strings"
	"time"

	"github.com/veggiemonk/awesome-docker/internal/cache"
	"github.com/veggiemonk/awesome-docker/internal/checker"
)

// Status represents the health status of an entry.
type Status string

const (
	StatusHealthy  Status = "healthy"
	StatusInactive Status = "inactive" // 1-2 years since last push
	StatusStale    Status = "stale"    // 2+ years since last push
	StatusArchived Status = "archived"
	StatusDead     Status = "dead" // disabled or 404
)

// ScoredEntry is a repo with its computed health status.
type ScoredEntry struct {
	URL         string
	Name        string
	Status      Status
	Stars       int
	Forks       int
	HasLicense  bool
	LastPush    time.Time
	Category    string
	Description string
}

// ReportSummary contains grouped status counts.
type ReportSummary struct {
	Healthy  int `json:"healthy"`
	Inactive int `json:"inactive"`
	Stale    int `json:"stale"`
	Archived int `json:"archived"`
	Dead     int `json:"dead"`
}

// ReportData is the full machine-readable report model.
type ReportData struct {
	GeneratedAt time.Time                `json:"generated_at"`
	Total       int                      `json:"total"`
	Summary     ReportSummary            `json:"summary"`
	Entries     []ScoredEntry            `json:"entries"`
	ByStatus    map[Status][]ScoredEntry `json:"by_status"`
}

// Score computes the health status of a GitHub repo.
func Score(info checker.RepoInfo) Status {
	if info.IsDisabled {
		return StatusDead
	}
	if info.IsArchived {
		return StatusArchived
	}

	twoYearsAgo := time.Now().AddDate(-2, 0, 0)
	oneYearAgo := time.Now().AddDate(-1, 0, 0)

	if info.PushedAt.Before(twoYearsAgo) {
		return StatusStale
	}
	if info.PushedAt.Before(oneYearAgo) {
		return StatusInactive
	}
	return StatusHealthy
}

// ScoreAll scores a batch of repo infos.
func ScoreAll(infos []checker.RepoInfo) []ScoredEntry {
	results := make([]ScoredEntry, len(infos))
	for i, info := range infos {
		results[i] = ScoredEntry{
			URL:        info.URL,
			Name:       fmt.Sprintf("%s/%s", info.Owner, info.Name),
			Status:     Score(info),
			Stars:      info.Stars,
			Forks:      info.Forks,
			HasLicense: info.HasLicense,
			LastPush:   info.PushedAt,
		}
	}
	return results
}

// ToCacheEntries converts scored entries to cache format.
func ToCacheEntries(scored []ScoredEntry) []cache.HealthEntry {
	entries := make([]cache.HealthEntry, len(scored))
	now := time.Now().UTC()
	for i, s := range scored {
		entries[i] = cache.HealthEntry{
			URL:         s.URL,
			Name:        s.Name,
			Status:      string(s.Status),
			Stars:       s.Stars,
			Forks:       s.Forks,
			HasLicense:  s.HasLicense,
			LastPush:    s.LastPush,
			CheckedAt:   now,
			Category:    s.Category,
			Description: s.Description,
		}
	}
	return entries
}

// GenerateReport produces a Markdown health report.
func GenerateReport(scored []ScoredEntry) string {
	var b strings.Builder

	data := BuildReportData(scored)
	groups := data.ByStatus

	fmt.Fprintf(&b, "# Health Report\n\n")
	fmt.Fprintf(&b, "**Generated:** %s\n\n", data.GeneratedAt.Format(time.RFC3339))
	fmt.Fprintf(&b, "**Total:** %d repositories\n\n", data.Total)

	fmt.Fprintf(&b, "## Summary\n\n")
	fmt.Fprintf(&b, "- Healthy: %d\n", data.Summary.Healthy)
	fmt.Fprintf(&b, "- Inactive (1-2 years): %d\n", data.Summary.Inactive)
	fmt.Fprintf(&b, "- Stale (2+ years): %d\n", data.Summary.Stale)
	fmt.Fprintf(&b, "- Archived: %d\n", data.Summary.Archived)
	fmt.Fprintf(&b, "- Dead: %d\n\n", data.Summary.Dead)

	writeSection := func(title string, status Status) {
		entries := groups[status]
		if len(entries) == 0 {
			return
		}
		fmt.Fprintf(&b, "## %s\n\n", title)
		for _, e := range entries {
			fmt.Fprintf(&b, "- [%s](%s) - Stars: %d - Last push: %s\n",
				e.Name, e.URL, e.Stars, e.LastPush.Format("2006-01-02"))
		}
		b.WriteString("\n")
	}

	writeSection("Archived (should mark :skull:)", StatusArchived)
	writeSection("Stale (2+ years inactive)", StatusStale)
	writeSection("Inactive (1-2 years)", StatusInactive)

	return b.String()
}

// BuildReportData returns full report data for machine-readable and markdown rendering.
func BuildReportData(scored []ScoredEntry) ReportData {
	groups := map[Status][]ScoredEntry{}
	for _, s := range scored {
		groups[s.Status] = append(groups[s.Status], s)
	}

	return ReportData{
		GeneratedAt: time.Now().UTC(),
		Total:       len(scored),
		Summary: ReportSummary{
			Healthy:  len(groups[StatusHealthy]),
			Inactive: len(groups[StatusInactive]),
			Stale:    len(groups[StatusStale]),
			Archived: len(groups[StatusArchived]),
			Dead:     len(groups[StatusDead]),
		},
		Entries:  scored,
		ByStatus: groups,
	}
}

// GenerateJSONReport returns the full report as pretty-printed JSON.
func GenerateJSONReport(scored []ScoredEntry) ([]byte, error) {
	data := BuildReportData(scored)
	return json.MarshalIndent(data, "", "  ")
}
164
internal/scorer/scorer_test.go
Normal file
@@ -0,0 +1,164 @@
package scorer

import (
	"encoding/json"
	"fmt"
	"strings"
	"testing"
	"time"

	"github.com/veggiemonk/awesome-docker/internal/checker"
)

func TestScoreHealthy(t *testing.T) {
	info := checker.RepoInfo{
		PushedAt:   time.Now().AddDate(0, -3, 0),
		IsArchived: false,
		Stars:      100,
		HasLicense: true,
	}
	status := Score(info)
	if status != StatusHealthy {
		t.Errorf("status = %q, want %q", status, StatusHealthy)
	}
}

func TestScoreInactive(t *testing.T) {
	info := checker.RepoInfo{
		PushedAt:   time.Now().AddDate(-1, -6, 0),
		IsArchived: false,
	}
	status := Score(info)
	if status != StatusInactive {
		t.Errorf("status = %q, want %q", status, StatusInactive)
	}
}

func TestScoreStale(t *testing.T) {
	info := checker.RepoInfo{
		PushedAt:   time.Now().AddDate(-3, 0, 0),
		IsArchived: false,
	}
	status := Score(info)
	if status != StatusStale {
		t.Errorf("status = %q, want %q", status, StatusStale)
	}
}

func TestScoreArchived(t *testing.T) {
	info := checker.RepoInfo{
		PushedAt:   time.Now(),
		IsArchived: true,
	}
	status := Score(info)
	if status != StatusArchived {
		t.Errorf("status = %q, want %q", status, StatusArchived)
	}
}

func TestScoreDisabled(t *testing.T) {
	info := checker.RepoInfo{
		IsDisabled: true,
	}
	status := Score(info)
	if status != StatusDead {
		t.Errorf("status = %q, want %q", status, StatusDead)
	}
}

func TestGenerateReport(t *testing.T) {
	results := []ScoredEntry{
		{URL: "https://github.com/a/a", Name: "a/a", Status: StatusHealthy, Stars: 100, LastPush: time.Now()},
		{URL: "https://github.com/b/b", Name: "b/b", Status: StatusArchived, Stars: 50, LastPush: time.Now()},
		{URL: "https://github.com/c/c", Name: "c/c", Status: StatusStale, Stars: 10, LastPush: time.Now().AddDate(-3, 0, 0)},
	}
	report := GenerateReport(results)
	if !strings.Contains(report, "Healthy: 1") {
		t.Error("report should contain 'Healthy: 1'")
	}
	if !strings.Contains(report, "Archived: 1") {
		t.Error("report should contain 'Archived: 1'")
	}
	if !strings.Contains(report, "Stale") {
		t.Error("report should contain 'Stale'")
	}
}

func TestGenerateReportShowsAllEntries(t *testing.T) {
	var results []ScoredEntry
	for i := 0; i < 55; i++ {
		results = append(results, ScoredEntry{
			URL:      fmt.Sprintf("https://github.com/stale/%d", i),
			Name:     fmt.Sprintf("stale/%d", i),
			Status:   StatusStale,
			Stars:    i,
			LastPush: time.Now().AddDate(-3, 0, 0),
		})
	}

	report := GenerateReport(results)
	if strings.Contains(report, "... and") {
		t.Fatal("report should not be truncated")
	}
	if !strings.Contains(report, "stale/54") {
		t.Fatal("report should contain all entries")
	}
}

func TestGenerateJSONReport(t *testing.T) {
	results := []ScoredEntry{
		{
			URL:      "https://github.com/a/a",
			Name:     "a/a",
			Status:   StatusHealthy,
			Stars:    100,
			LastPush: time.Now(),
		},
		{
			URL:      "https://github.com/b/b",
			Name:     "b/b",
			Status:   StatusStale,
			Stars:    50,
			LastPush: time.Now().AddDate(-3, 0, 0),
		},
	}

	data, err := GenerateJSONReport(results)
	if err != nil {
		t.Fatalf("GenerateJSONReport() error = %v", err)
	}

	var report ReportData
	if err := json.Unmarshal(data, &report); err != nil {
		t.Fatalf("json.Unmarshal() error = %v", err)
	}
	if report.Total != 2 {
		t.Fatalf("report.Total = %d, want 2", report.Total)
	}
	if report.Summary.Healthy != 1 || report.Summary.Stale != 1 {
		t.Fatalf("summary = %+v, want healthy=1 stale=1", report.Summary)
	}
	if len(report.Entries) != 2 {
		t.Fatalf("len(report.Entries) = %d, want 2", len(report.Entries))
	}
	if len(report.ByStatus[StatusStale]) != 1 {
		t.Fatalf("len(report.ByStatus[stale]) = %d, want 1", len(report.ByStatus[StatusStale]))
	}
}

func TestScoreAll(t *testing.T) {
	infos := []checker.RepoInfo{
		{Owner: "a", Name: "a", PushedAt: time.Now(), Stars: 10},
		{Owner: "b", Name: "b", PushedAt: time.Now().AddDate(-3, 0, 0), Stars: 5},
	}
	scored := ScoreAll(infos)
	if len(scored) != 2 {
		t.Fatalf("scored = %d, want 2", len(scored))
	}
	if scored[0].Status != StatusHealthy {
		t.Errorf("first = %q, want healthy", scored[0].Status)
	}
	if scored[1].Status != StatusStale {
		t.Errorf("second = %q, want stale", scored[1].Status)
	}
}
603
internal/tui/model.go
Normal file
@@ -0,0 +1,603 @@
package tui

import (
	"fmt"
	"os/exec"
	"runtime"
	"strings"
	"unicode/utf8"

	tea "charm.land/bubbletea/v2"
	"charm.land/lipgloss/v2"
	"github.com/veggiemonk/awesome-docker/internal/cache"
)

type panel int

const (
	panelTree panel = iota
	panelList
)

const entryHeight = 5 // lines rendered per entry in the list panel
const scrollOff = 4   // minimum lines/entries kept visible above and below cursor

// Model is the top-level Bubbletea model.
type Model struct {
	roots    []*TreeNode
	flatTree []FlatNode

	activePanel    panel
	treeCursor     int
	treeOffset     int
	listCursor     int
	listOffset     int
	currentEntries []cache.HealthEntry

	filtering  bool
	filterText string

	width, height int
}

// New creates a new Model from health cache entries.
func New(entries []cache.HealthEntry) Model {
	roots := BuildTree(entries)
	// Expand first root by default
	if len(roots) > 0 {
		roots[0].Expanded = true
	}
	flat := FlattenVisible(roots)

	m := Model{
		roots:    roots,
		flatTree: flat,
	}
	m.updateCurrentEntries()
	return m
}

// Init returns an initial command.
func (m Model) Init() tea.Cmd {
	return nil
}

// Update handles messages.
func (m Model) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
	switch msg := msg.(type) {
	case tea.WindowSizeMsg:
		m.width = msg.Width
		m.height = msg.Height
		return m, nil

	case openURLMsg:
		return m, nil

	case tea.KeyPressMsg:
		// Filter mode input
		if m.filtering {
			return m.handleFilterKey(msg)
		}

		switch msg.String() {
		case "q", "ctrl+c":
			return m, tea.Quit
		case "tab":
			if m.activePanel == panelTree {
				m.activePanel = panelList
			} else {
				m.activePanel = panelTree
			}
		case "/":
			m.filtering = true
			m.filterText = ""
		default:
			if m.activePanel == panelTree {
				return m.handleTreeKey(msg)
			}
			return m.handleListKey(msg)
		}
	}
	return m, nil
}

func (m Model) handleFilterKey(msg tea.KeyPressMsg) (tea.Model, tea.Cmd) {
	switch msg.String() {
	case "esc":
		m.filtering = false
		m.filterText = ""
		m.flatTree = FlattenVisible(m.roots)
		m.updateCurrentEntries()
	case "enter":
		m.filtering = false
	case "backspace":
		if len(m.filterText) > 0 {
			m.filterText = m.filterText[:len(m.filterText)-1]
			m.applyFilter()
		}
	default:
		r := msg.String()
		if utf8.RuneCountInString(r) == 1 {
			m.filterText += r
			m.applyFilter()
		}
	}
	return m, nil
}

func (m *Model) applyFilter() {
	if m.filterText == "" {
		m.flatTree = FlattenVisible(m.roots)
		m.updateCurrentEntries()
		return
	}

	query := strings.ToLower(m.filterText)
	var filtered []cache.HealthEntry
	for _, root := range m.roots {
		for _, e := range root.AllEntries() {
			if strings.Contains(strings.ToLower(e.Name), query) ||
				strings.Contains(strings.ToLower(e.Description), query) ||
				strings.Contains(strings.ToLower(e.Category), query) {
				filtered = append(filtered, e)
			}
		}
	}
	m.currentEntries = filtered
	m.listCursor = 0
	m.listOffset = 0
}

func (m Model) handleTreeKey(msg tea.KeyPressMsg) (tea.Model, tea.Cmd) {
	switch msg.String() {
	case "up", "k":
		if m.treeCursor > 0 {
			m.treeCursor--
			m.adjustTreeScroll()
			m.updateCurrentEntries()
		}
	case "down", "j":
		if m.treeCursor < len(m.flatTree)-1 {
			m.treeCursor++
			m.adjustTreeScroll()
			m.updateCurrentEntries()
		}
	case "enter", " ":
		if m.treeCursor < len(m.flatTree) {
			node := m.flatTree[m.treeCursor].Node
			if node.HasChildren() {
				node.Expanded = !node.Expanded
				m.flatTree = FlattenVisible(m.roots)
				if m.treeCursor >= len(m.flatTree) {
					m.treeCursor = len(m.flatTree) - 1
				}
			}
			m.adjustTreeScroll()
			m.updateCurrentEntries()
		}
	case "ctrl+d", "pgdown":
		half := m.treePanelHeight() / 2
		if half < 1 {
			half = 1
		}
		m.treeCursor += half
		if m.treeCursor >= len(m.flatTree) {
			m.treeCursor = len(m.flatTree) - 1
		}
		m.adjustTreeScroll()
		m.updateCurrentEntries()
	case "ctrl+u", "pgup":
		half := m.treePanelHeight() / 2
		if half < 1 {
			half = 1
		}
		m.treeCursor -= half
		if m.treeCursor < 0 {
			m.treeCursor = 0
		}
		m.adjustTreeScroll()
		m.updateCurrentEntries()
	case "g", "home":
		m.treeCursor = 0
		m.adjustTreeScroll()
		m.updateCurrentEntries()
	case "G", "end":
		m.treeCursor = len(m.flatTree) - 1
		m.adjustTreeScroll()
		m.updateCurrentEntries()
	case "right", "l":
		if m.treeCursor < len(m.flatTree) {
			node := m.flatTree[m.treeCursor].Node
			if node.HasChildren() && !node.Expanded {
				node.Expanded = true
				m.flatTree = FlattenVisible(m.roots)
				m.adjustTreeScroll()
				m.updateCurrentEntries()
			} else {
				m.activePanel = panelList
			}
		}
	case "left", "h":
		if m.treeCursor < len(m.flatTree) {
			node := m.flatTree[m.treeCursor].Node
			if node.HasChildren() && node.Expanded {
				node.Expanded = false
				m.flatTree = FlattenVisible(m.roots)
				m.adjustTreeScroll()
				m.updateCurrentEntries()
			}
		}
	}
	return m, nil
}

func (m *Model) adjustTreeScroll() {
	visible := m.treePanelHeight()
	off := scrollOff
	if off > visible/2 {
		off = visible / 2
	}
	if m.treeCursor < m.treeOffset+off {
		m.treeOffset = m.treeCursor - off
	}
	if m.treeCursor >= m.treeOffset+visible-off {
		m.treeOffset = m.treeCursor - visible + off + 1
	}
	if m.treeOffset < 0 {
		m.treeOffset = 0
	}
}

func (m Model) treePanelHeight() int {
	h := m.height - 6 // header, footer, borders, title
	if h < 1 {
		h = 1
	}
	return h
}

func (m Model) handleListKey(msg tea.KeyPressMsg) (tea.Model, tea.Cmd) {
	switch msg.String() {
	case "up", "k":
		if m.listCursor > 0 {
			m.listCursor--
			m.adjustListScroll()
		}
	case "down", "j":
		if m.listCursor < len(m.currentEntries)-1 {
			m.listCursor++
			m.adjustListScroll()
		}
	case "ctrl+d", "pgdown":
		half := m.visibleListEntries() / 2
		if half < 1 {
			half = 1
		}
		m.listCursor += half
		if m.listCursor >= len(m.currentEntries) {
			m.listCursor = len(m.currentEntries) - 1
		}
		m.adjustListScroll()
	case "ctrl+u", "pgup":
		half := m.visibleListEntries() / 2
		if half < 1 {
			half = 1
		}
		m.listCursor -= half
		if m.listCursor < 0 {
			m.listCursor = 0
		}
		m.adjustListScroll()
	case "g", "home":
		m.listCursor = 0
		m.adjustListScroll()
	case "G", "end":
		m.listCursor = len(m.currentEntries) - 1
		m.adjustListScroll()
	case "enter":
		if m.listCursor < len(m.currentEntries) {
			return m, openURL(m.currentEntries[m.listCursor].URL)
		}
	case "left", "h":
		m.activePanel = panelTree
	}
	return m, nil
}

func (m *Model) updateCurrentEntries() {
	if len(m.flatTree) == 0 {
		m.currentEntries = nil
		return
	}
	if m.treeCursor >= len(m.flatTree) {
		m.treeCursor = len(m.flatTree) - 1
	}
	node := m.flatTree[m.treeCursor].Node
	m.currentEntries = node.AllEntries()
	m.listCursor = 0
	m.listOffset = 0
}

func (m Model) visibleListEntries() int {
	v := m.listPanelHeight() / entryHeight
	if v < 1 {
		return 1
	}
	return v
}

func (m *Model) adjustListScroll() {
	visible := m.visibleListEntries()
	off := scrollOff
	if off > visible/2 {
		off = visible / 2
	}
	if m.listCursor < m.listOffset+off {
		m.listOffset = m.listCursor - off
	}
	if m.listCursor >= m.listOffset+visible-off {
		m.listOffset = m.listCursor - visible + off + 1
	}
	if m.listOffset < 0 {
		m.listOffset = 0
	}
}

func (m Model) listPanelHeight() int {
	// height minus header, footer, borders
	h := m.height - 4
	if h < 1 {
		h = 1
	}
	return h
}

// View renders the UI.
func (m Model) View() tea.View {
	if m.width == 0 || m.height == 0 {
		return tea.NewView("Loading...")
	}

	treeWidth := m.width*3/10 - 2        // 30% minus borders
	listWidth := m.width - treeWidth - 6 // remaining minus borders/gaps
	contentHeight := m.height - 3        // minus footer

	if treeWidth < 10 {
		treeWidth = 10
	}
	if listWidth < 20 {
		listWidth = 20
	}
	if contentHeight < 3 {
		contentHeight = 3
	}

	tree := m.renderTree(treeWidth, contentHeight)
	list := m.renderList(listWidth, contentHeight)

	// Apply border styles
	treeBorder := inactiveBorderStyle
	listBorder := inactiveBorderStyle
	if m.activePanel == panelTree {
		treeBorder = activeBorderStyle
	} else {
		listBorder = activeBorderStyle
	}

	treePanel := treeBorder.Width(treeWidth).Height(contentHeight).Render(tree)
	listPanel := listBorder.Width(listWidth).Height(contentHeight).Render(list)

	body := lipgloss.JoinHorizontal(lipgloss.Top, treePanel, listPanel)

	footer := m.renderFooter()

	content := lipgloss.JoinVertical(lipgloss.Left, body, footer)

	v := tea.NewView(content)
	v.AltScreen = true
	return v
}

func (m Model) renderTree(width, height int) string {
	var b strings.Builder

	title := headerStyle.Render("Categories")
	b.WriteString(title)
	b.WriteString("\n\n")

	linesUsed := 2
	end := m.treeOffset + height - 2
	if end > len(m.flatTree) {
		end = len(m.flatTree)
	}
	for i := m.treeOffset; i < end; i++ {
		fn := m.flatTree[i]
		if linesUsed >= height {
			break
		}

		indent := strings.Repeat(" ", fn.Depth)
		icon := " "
		if fn.Node.HasChildren() {
			if fn.Node.Expanded {
				icon = "▼ "
			} else {
				icon = "▶ "
			}
		}

		count := fn.Node.TotalEntries()
		label := fmt.Sprintf("%s%s%s (%d)", indent, icon, fn.Node.Name, count)

		// Truncate to width
		if len(label) > width {
			label = label[:width-1] + "…"
		}

		if i == m.treeCursor {
			label = treeSelectedStyle.Render(label)
		} else {
			label = treeNormalStyle.Render(label)
		}

		b.WriteString(label)
		b.WriteString("\n")
		linesUsed++
	}

	return b.String()
}

func (m Model) renderList(width, height int) string {
	var b strings.Builder

	// Title
	title := "Resources"
	if m.filtering && m.filterText != "" {
		title = fmt.Sprintf("Resources (filter: %s)", m.filterText)
	}
	b.WriteString(headerStyle.Render(title))
	b.WriteString("\n\n")

	if len(m.currentEntries) == 0 {
		b.WriteString(entryDescStyle.Render(" No entries"))
		return b.String()
	}

	linesUsed := 2

	visible := (height - 2) / entryHeight
	if visible < 1 {
		visible = 1
	}

	start := m.listOffset
	end := start + visible
	if end > len(m.currentEntries) {
		end = len(m.currentEntries)
	}

	for idx := start; idx < end; idx++ {
		if linesUsed+entryHeight > height {
			break
		}

		e := m.currentEntries[idx]
		selected := idx == m.listCursor

		// Use a safe width that accounts for Unicode characters (★, ⑂)
		// that some terminals render as 2 columns but lipgloss counts as 1.
		safeWidth := width - 2

		// Line 1: name + stars + forks
		stats := fmt.Sprintf("★ %d", e.Stars)
		if e.Forks > 0 {
			stats += fmt.Sprintf(" ⑂ %d", e.Forks)
		}
		name := e.Name
		statsW := lipgloss.Width(stats)
		maxName := safeWidth - statsW - 2 // 2 for minimum gap
		if maxName < 4 {
			maxName = 4
		}
		if lipgloss.Width(name) > maxName {
			name = truncateToWidth(name, maxName-1) + "…"
		}
		nameStr := entryNameStyle.Render(name)
		statsStr := entryDescStyle.Render(stats)
		padding := safeWidth - lipgloss.Width(nameStr) - lipgloss.Width(statsStr)
		if padding < 1 {
			padding = 1
		}
		line1 := nameStr + strings.Repeat(" ", padding) + statsStr

		// Line 2: URL
		url := e.URL
		if lipgloss.Width(url) > safeWidth {
			url = truncateToWidth(url, safeWidth-1) + "…"
		}
		line2 := entryURLStyle.Render(url)

		// Line 3: description
		desc := e.Description
		if lipgloss.Width(desc) > safeWidth {
			desc = truncateToWidth(desc, safeWidth-3) + "..."
		}
		line3 := entryDescStyle.Render(desc)

		// Line 4: status + last push
		statusStr := statusStyle(e.Status).Render(e.Status)
		lastPush := ""
		if !e.LastPush.IsZero() {
			lastPush = fmt.Sprintf(" Last push: %s", e.LastPush.Format("2006-01-02"))
		}
		line4 := statusStr + entryDescStyle.Render(lastPush)

		// Line 5: separator
		sepWidth := safeWidth
		if sepWidth < 1 {
			sepWidth = 1
		}
		line5 := entryDescStyle.Render(strings.Repeat("─", sepWidth))

		entry := fmt.Sprintf("%s\n%s\n%s\n%s\n%s", line1, line2, line3, line4, line5)

		if selected && m.activePanel == panelList {
			entry = entrySelectedStyle.Render(entry)
		}

		b.WriteString(entry)
		b.WriteString("\n")
		linesUsed += entryHeight
	}

	// Scroll indicator
	if len(m.currentEntries) > visible {
		indicator := fmt.Sprintf(" %d-%d of %d", start+1, end, len(m.currentEntries))
		b.WriteString(footerStyle.Render(indicator))
	}

	return b.String()
}

func (m Model) renderFooter() string {
	if m.filtering {
		return filterPromptStyle.Render("/") + entryDescStyle.Render(m.filterText+"█")
	}
	help := " Tab:switch j/k:nav PgDn/PgUp:page g/G:top/bottom Enter:expand/open /:filter q:quit"
	return footerStyle.Render(help)
}

// openURLMsg is sent after attempting to open a URL.
type openURLMsg struct{ err error }

func openURL(url string) tea.Cmd {
	return func() tea.Msg {
		var cmd *exec.Cmd
		switch runtime.GOOS {
		case "darwin":
			cmd = exec.Command("open", url)
		case "windows":
			cmd = exec.Command("cmd", "/c", "start", url)
		default:
			cmd = exec.Command("xdg-open", url)
		}
		return openURLMsg{err: cmd.Run()}
	}
}

// truncateToWidth truncates s to at most maxWidth visible columns.
func truncateToWidth(s string, maxWidth int) string {
	if maxWidth <= 0 {
		return ""
	}
	w := 0
	for i, r := range s {
		rw := lipgloss.Width(string(r))
		if w+rw > maxWidth {
			return s[:i]
		}
		w += rw
	}
	return s
}
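adjustTreeScroll and adjustListScroll above apply the same scroll-margin rule: keep the cursor at least scrollOff rows away from either edge of the viewport, shrinking the margin when the viewport is small. Factored out, the math can be sketched like this (clampOffset is a hypothetical helper name, not part of the package):

```go
package main

import "fmt"

// clampOffset returns the new scroll offset so that cursor stays at
// least `margin` rows from the top and bottom of a viewport of
// `visible` rows, with the margin capped at half the viewport and the
// offset never going negative.
func clampOffset(cursor, offset, visible, margin int) int {
	if margin > visible/2 {
		margin = visible / 2
	}
	if cursor < offset+margin {
		offset = cursor - margin
	}
	if cursor >= offset+visible-margin {
		offset = cursor - visible + margin + 1
	}
	if offset < 0 {
		offset = 0
	}
	return offset
}

func main() {
	// Cursor jumped to row 20 in a 10-row viewport: scroll down.
	fmt.Println(clampOffset(20, 0, 10, 4)) // 15
	// Cursor at the very top: offset clamps to 0.
	fmt.Println(clampOffset(0, 5, 10, 4)) // 0
}
```

The +1 in the downward branch places the cursor exactly `margin` rows above the bottom edge, mirroring the upward branch.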
59
internal/tui/styles.go
Normal file
@@ -0,0 +1,59 @@
package tui

import "charm.land/lipgloss/v2"

var (
	// Panel borders
	activeBorderStyle = lipgloss.NewStyle().
		Border(lipgloss.RoundedBorder()).
		BorderForeground(lipgloss.Color("#7D56F4"))

	inactiveBorderStyle = lipgloss.NewStyle().
		Border(lipgloss.RoundedBorder()).
		BorderForeground(lipgloss.Color("#555555"))

	// Tree styles
	treeSelectedStyle = lipgloss.NewStyle().Bold(true).Foreground(lipgloss.Color("#FF79C6")).Background(lipgloss.Color("#3B2D50"))
	treeNormalStyle   = lipgloss.NewStyle().Foreground(lipgloss.Color("#CCCCCC"))

	// Entry styles
	entryNameStyle = lipgloss.NewStyle().Bold(true).Foreground(lipgloss.Color("#50FA7B"))
	entryURLStyle  = lipgloss.NewStyle().Foreground(lipgloss.Color("#888888")).Italic(true)
	entryDescStyle = lipgloss.NewStyle().Foreground(lipgloss.Color("#CCCCCC"))

	// Status badge styles
	statusHealthyStyle  = lipgloss.NewStyle().Foreground(lipgloss.Color("#50FA7B")).Bold(true)
	statusInactiveStyle = lipgloss.NewStyle().Foreground(lipgloss.Color("#FFB86C"))
	statusStaleStyle    = lipgloss.NewStyle().Foreground(lipgloss.Color("#F1FA8C"))
	statusArchivedStyle = lipgloss.NewStyle().Foreground(lipgloss.Color("#FF5555")).Bold(true)
	statusDeadStyle     = lipgloss.NewStyle().Foreground(lipgloss.Color("#666666")).Strikethrough(true)

	// Selected entry
	entrySelectedStyle = lipgloss.NewStyle().Background(lipgloss.Color("#44475A"))

	// Header
	headerStyle = lipgloss.NewStyle().Bold(true).Foreground(lipgloss.Color("#BD93F9")).Padding(0, 1)

	// Footer
	footerStyle = lipgloss.NewStyle().Foreground(lipgloss.Color("#666666"))

	// Filter
	filterPromptStyle = lipgloss.NewStyle().Foreground(lipgloss.Color("#FF79C6")).Bold(true)
)

func statusStyle(status string) lipgloss.Style {
	switch status {
	case "healthy":
		return statusHealthyStyle
|
||||||
|
case "inactive":
|
||||||
|
return statusInactiveStyle
|
||||||
|
case "stale":
|
||||||
|
return statusStaleStyle
|
||||||
|
case "archived":
|
||||||
|
return statusArchivedStyle
|
||||||
|
case "dead":
|
||||||
|
return statusDeadStyle
|
||||||
|
default:
|
||||||
|
return lipgloss.NewStyle()
|
||||||
|
}
|
||||||
|
}
|
||||||
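The status-to-style mapping in `statusStyle` can be approximated with raw terminal escapes, which makes the lookup pattern easy to try outside the TUI. A rough stdlib-only sketch (`statusANSI` is a hypothetical helper; the ANSI codes are loose approximations of the lipgloss hex colors above, not an exact match):

```go
package main

import "fmt"

// statusANSI wraps a status string in an ANSI color escape,
// mirroring the switch in statusStyle: unknown statuses pass
// through unstyled, just like the default lipgloss.NewStyle().
func statusANSI(status string) string {
	codes := map[string]string{
		"healthy":  "1;32", // bold green
		"inactive": "33",   // yellow
		"stale":    "93",   // bright yellow
		"archived": "1;31", // bold red
		"dead":     "9;90", // strikethrough grey
	}
	c, ok := codes[status]
	if !ok {
		return status
	}
	return "\x1b[" + c + "m" + status + "\x1b[0m"
}

func main() {
	for _, s := range []string{"healthy", "dead", "unknown"} {
		fmt.Println(statusANSI(s))
	}
}
```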
internal/tui/tree.go (new file, 118 lines)

```go
package tui

import (
	"sort"
	"strings"

	"github.com/veggiemonk/awesome-docker/internal/cache"
)

// TreeNode represents a node in the category tree.
type TreeNode struct {
	Name     string // display name (leaf segment, e.g. "Networking")
	Path     string // full path (e.g. "Container Operations > Networking")
	Children []*TreeNode
	Expanded bool
	Entries  []cache.HealthEntry
}

// FlatNode is a visible tree node with its indentation depth.
type FlatNode struct {
	Node  *TreeNode
	Depth int
}

// HasChildren returns true if this node has child categories.
func (n *TreeNode) HasChildren() bool {
	return len(n.Children) > 0
}

// TotalEntries returns the count of entries in this node and all descendants.
func (n *TreeNode) TotalEntries() int {
	count := len(n.Entries)
	for _, c := range n.Children {
		count += c.TotalEntries()
	}
	return count
}

// AllEntries returns entries from this node and all descendants.
func (n *TreeNode) AllEntries() []cache.HealthEntry {
	result := make([]cache.HealthEntry, 0, n.TotalEntries())
	result = append(result, n.Entries...)
	for _, c := range n.Children {
		result = append(result, c.AllEntries()...)
	}
	return result
}

// BuildTree constructs a tree from a flat HealthEntry slice, grouping by Category.
func BuildTree(entries []cache.HealthEntry) []*TreeNode {
	root := &TreeNode{Name: "root"}
	nodeMap := map[string]*TreeNode{}

	for _, e := range entries {
		cat := e.Category
		if cat == "" {
			cat = "Uncategorized"
		}

		node := ensureNode(root, nodeMap, cat)
		node.Entries = append(node.Entries, e)
	}

	// Sort children at every level
	sortTree(root)
	return root.Children
}

func ensureNode(root *TreeNode, nodeMap map[string]*TreeNode, path string) *TreeNode {
	if n, ok := nodeMap[path]; ok {
		return n
	}

	parts := strings.Split(path, " > ")
	current := root
	for i, part := range parts {
		subpath := strings.Join(parts[:i+1], " > ")
		if n, ok := nodeMap[subpath]; ok {
			current = n
			continue
		}
		child := &TreeNode{
			Name: part,
			Path: subpath,
		}
		current.Children = append(current.Children, child)
		nodeMap[subpath] = child
		current = child
	}
	return current
}

func sortTree(node *TreeNode) {
	sort.Slice(node.Children, func(i, j int) bool {
		return node.Children[i].Name < node.Children[j].Name
	})
	for _, c := range node.Children {
		sortTree(c)
	}
}

// FlattenVisible returns visible nodes in depth-first order for rendering.
func FlattenVisible(roots []*TreeNode) []FlatNode {
	var result []FlatNode
	for _, r := range roots {
		flattenNode(r, 0, &result)
	}
	return result
}

func flattenNode(node *TreeNode, depth int, result *[]FlatNode) {
	*result = append(*result, FlatNode{Node: node, Depth: depth})
	if node.Expanded {
		for _, c := range node.Children {
			flattenNode(c, depth+1, result)
		}
	}
}
```
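The grouping idea behind BuildTree/ensureNode (split a `" > "`-separated category path, create intermediate nodes on demand) can be sketched standalone. A minimal, self-contained version with simplified types (`node`, `ensure`, and `dump` are illustrative names, not the package's API):

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// node is a stripped-down TreeNode: a name, named children,
// and the entries filed directly under this category.
type node struct {
	name     string
	children map[string]*node
	entries  []string
}

// ensure walks a " > "-separated path from root, creating any
// missing intermediate nodes, and returns the leaf node.
func ensure(root *node, path string) *node {
	cur := root
	for _, part := range strings.Split(path, " > ") {
		if cur.children == nil {
			cur.children = map[string]*node{}
		}
		child, ok := cur.children[part]
		if !ok {
			child = &node{name: part}
			cur.children[part] = child
		}
		cur = child
	}
	return cur
}

// dump prints the tree depth-first with children in sorted order,
// like sortTree + FlattenVisible combined.
func dump(n *node, depth int) {
	keys := make([]string, 0, len(n.children))
	for k := range n.children {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	for _, k := range keys {
		c := n.children[k]
		fmt.Printf("%s%s (%d)\n", strings.Repeat("  ", depth), c.name, len(c.entries))
		dump(c, depth+1)
	}
}

func main() {
	root := &node{}
	for _, e := range []struct{ cat, url string }{
		{"Projects > Networking", "https://github.com/a/b"},
		{"Projects > Networking", "https://github.com/c/d"},
		{"Projects > Security", "https://github.com/e/f"},
	} {
		n := ensure(root, e.cat)
		n.entries = append(n.entries, e.url)
	}
	dump(root, 0)
	// → Projects (0)
	//     Networking (2)
	//     Security (1)
}
```

The real implementation additionally memoizes paths in `nodeMap` so repeated categories skip the walk, and keeps `Children` as an ordered slice (sorted once at the end) rather than a map, since rendering needs a stable order.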
internal/tui/tree_test.go (new file, 109 lines)

```go
package tui

import (
	"testing"

	"github.com/veggiemonk/awesome-docker/internal/cache"
)

func TestBuildTree(t *testing.T) {
	entries := []cache.HealthEntry{
		{URL: "https://github.com/a/b", Name: "a/b", Category: "Projects > Networking", Description: "desc1"},
		{URL: "https://github.com/c/d", Name: "c/d", Category: "Projects > Networking", Description: "desc2"},
		{URL: "https://github.com/e/f", Name: "e/f", Category: "Projects > Security", Description: "desc3"},
		{URL: "https://github.com/g/h", Name: "g/h", Category: "Docker Images > Base Tools", Description: "desc4"},
		{URL: "https://github.com/i/j", Name: "i/j", Category: "", Description: "no category"},
	}

	roots := BuildTree(entries)

	// Should have 3 roots: Docker Images, Projects, Uncategorized (sorted)
	if len(roots) != 3 {
		t.Fatalf("expected 3 roots, got %d", len(roots))
	}

	if roots[0].Name != "Docker Images" {
		t.Errorf("expected first root 'Docker Images', got %q", roots[0].Name)
	}
	if roots[1].Name != "Projects" {
		t.Errorf("expected second root 'Projects', got %q", roots[1].Name)
	}
	if roots[2].Name != "Uncategorized" {
		t.Errorf("expected third root 'Uncategorized', got %q", roots[2].Name)
	}

	// Projects > Networking should have 2 entries
	projects := roots[1]
	if len(projects.Children) != 2 {
		t.Fatalf("expected 2 children under Projects, got %d", len(projects.Children))
	}
	networking := projects.Children[0] // Networking < Security alphabetically
	if networking.Name != "Networking" {
		t.Errorf("expected 'Networking', got %q", networking.Name)
	}
	if len(networking.Entries) != 2 {
		t.Errorf("expected 2 entries in Networking, got %d", len(networking.Entries))
	}
}

func TestBuildTreeEmpty(t *testing.T) {
	roots := BuildTree(nil)
	if len(roots) != 0 {
		t.Errorf("expected 0 roots for nil input, got %d", len(roots))
	}
}

func TestTotalEntries(t *testing.T) {
	entries := []cache.HealthEntry{
		{URL: "https://a", Category: "A > B"},
		{URL: "https://b", Category: "A > B"},
		{URL: "https://c", Category: "A > C"},
		{URL: "https://d", Category: "A"},
	}
	roots := BuildTree(entries)
	if len(roots) != 1 {
		t.Fatalf("expected 1 root, got %d", len(roots))
	}
	if roots[0].TotalEntries() != 4 {
		t.Errorf("expected 4 total entries, got %d", roots[0].TotalEntries())
	}
}

func TestFlattenVisible(t *testing.T) {
	entries := []cache.HealthEntry{
		{URL: "https://a", Category: "A > B"},
		{URL: "https://b", Category: "A > C"},
	}
	roots := BuildTree(entries)

	// Initially not expanded, should see just root
	flat := FlattenVisible(roots)
	if len(flat) != 1 {
		t.Fatalf("expected 1 visible node (collapsed), got %d", len(flat))
	}
	if flat[0].Depth != 0 {
		t.Errorf("expected depth 0, got %d", flat[0].Depth)
	}

	// Expand root
	roots[0].Expanded = true
	flat = FlattenVisible(roots)
	if len(flat) != 3 {
		t.Fatalf("expected 3 visible nodes (expanded), got %d", len(flat))
	}
	if flat[1].Depth != 1 {
		t.Errorf("expected depth 1 for child, got %d", flat[1].Depth)
	}
}

func TestAllEntries(t *testing.T) {
	entries := []cache.HealthEntry{
		{URL: "https://a", Category: "A > B"},
		{URL: "https://b", Category: "A"},
	}
	roots := BuildTree(entries)
	all := roots[0].AllEntries()
	if len(all) != 2 {
		t.Errorf("expected 2 entries from AllEntries, got %d", len(all))
	}
}
```
internal/tui/tui.go (new file, 14 lines)

```go
package tui

import (
	tea "charm.land/bubbletea/v2"
	"github.com/veggiemonk/awesome-docker/internal/cache"
)

// Run launches the TUI browser. It blocks until the user quits.
func Run(entries []cache.HealthEntry) error {
	m := New(entries)
	p := tea.NewProgram(m)
	_, err := p.Run()
	return err
}
```
package-lock.json (generated, 973 lines deleted)

```json
{
  "name": "awesome-docker-website",
  "version": "1.0.0",
  "lockfileVersion": 3,
  "requires": true,
  "packages": {
    "": {
      "name": "awesome-docker-website",
      "version": "1.0.0",
      "license": "Apache-2.0",
      "dependencies": {
        "cheerio": "1.1.2",
        "draftlog": "1.0.13",
        "fs-extra": "11.3.2",
        "node-fetch": "3.3.2",
        "rimraf": "6.0.1",
        "showdown": "^2.1.0"
      }
    }
  }
}
```

(The rest of the deleted file is the generated `node_modules/*` lockfile entries for the transitive dependencies of the packages above; the diff is cut off mid-entry in this capture.)
|
|
||||||
"url": "https://opencollective.com/node-fetch"
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"node_modules/nth-check": {
|
|
||||||
"version": "2.1.1",
|
|
||||||
"resolved": "https://registry.npmjs.org/nth-check/-/nth-check-2.1.1.tgz",
|
|
||||||
"integrity": "sha512-lqjrjmaOoAnWfMmBPL+XNnynZh2+swxiX3WUE0s4yEHI6m+AwrK2UZOimIRl3X/4QctVqS8AiZjFqyOGrMXb/w==",
|
|
||||||
"license": "BSD-2-Clause",
|
|
||||||
"dependencies": {
|
|
||||||
"boolbase": "^1.0.0"
|
|
||||||
},
|
|
||||||
"funding": {
|
|
||||||
"url": "https://github.com/fb55/nth-check?sponsor=1"
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"node_modules/package-json-from-dist": {
|
|
||||||
"version": "1.0.1",
|
|
||||||
"resolved": "https://registry.npmjs.org/package-json-from-dist/-/package-json-from-dist-1.0.1.tgz",
|
|
||||||
"integrity": "sha512-UEZIS3/by4OC8vL3P2dTXRETpebLI2NiI5vIrjaD/5UtrkFX/tNbwjTSRAGC/+7CAo2pIcBaRgWmcBBHcsaCIw==",
|
|
||||||
"license": "BlueOak-1.0.0"
|
|
||||||
},
|
|
||||||
"node_modules/parse5": {
|
|
||||||
"version": "7.3.0",
|
|
||||||
"resolved": "https://registry.npmjs.org/parse5/-/parse5-7.3.0.tgz",
|
|
||||||
"integrity": "sha512-IInvU7fabl34qmi9gY8XOVxhYyMyuH2xUNpb2q8/Y+7552KlejkRvqvD19nMoUW/uQGGbqNpA6Tufu5FL5BZgw==",
|
|
||||||
"license": "MIT",
|
|
||||||
"dependencies": {
|
|
||||||
"entities": "^6.0.0"
|
|
||||||
},
|
|
||||||
"funding": {
|
|
||||||
"url": "https://github.com/inikulin/parse5?sponsor=1"
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"node_modules/parse5-htmlparser2-tree-adapter": {
|
|
||||||
"version": "7.1.0",
|
|
||||||
"resolved": "https://registry.npmjs.org/parse5-htmlparser2-tree-adapter/-/parse5-htmlparser2-tree-adapter-7.1.0.tgz",
|
|
||||||
"integrity": "sha512-ruw5xyKs6lrpo9x9rCZqZZnIUntICjQAd0Wsmp396Ul9lN/h+ifgVV1x1gZHi8euej6wTfpqX8j+BFQxF0NS/g==",
|
|
||||||
"license": "MIT",
|
|
||||||
"dependencies": {
|
|
||||||
"domhandler": "^5.0.3",
|
|
||||||
"parse5": "^7.0.0"
|
|
||||||
},
|
|
||||||
"funding": {
|
|
||||||
"url": "https://github.com/inikulin/parse5?sponsor=1"
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"node_modules/parse5-parser-stream": {
|
|
||||||
"version": "7.1.2",
|
|
||||||
"resolved": "https://registry.npmjs.org/parse5-parser-stream/-/parse5-parser-stream-7.1.2.tgz",
|
|
||||||
"integrity": "sha512-JyeQc9iwFLn5TbvvqACIF/VXG6abODeB3Fwmv/TGdLk2LfbWkaySGY72at4+Ty7EkPZj854u4CrICqNk2qIbow==",
|
|
||||||
"license": "MIT",
|
|
||||||
"dependencies": {
|
|
||||||
"parse5": "^7.0.0"
|
|
||||||
},
|
|
||||||
"funding": {
|
|
||||||
"url": "https://github.com/inikulin/parse5?sponsor=1"
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"node_modules/parse5/node_modules/entities": {
|
|
||||||
"version": "6.0.1",
|
|
||||||
"resolved": "https://registry.npmjs.org/entities/-/entities-6.0.1.tgz",
|
|
||||||
"integrity": "sha512-aN97NXWF6AWBTahfVOIrB/NShkzi5H7F9r1s9mD3cDj4Ko5f2qhhVoYMibXF7GlLveb/D2ioWay8lxI97Ven3g==",
|
|
||||||
"license": "BSD-2-Clause",
|
|
||||||
"engines": {
|
|
||||||
"node": ">=0.12"
|
|
||||||
},
|
|
||||||
"funding": {
|
|
||||||
"url": "https://github.com/fb55/entities?sponsor=1"
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"node_modules/path-key": {
|
|
||||||
"version": "3.1.1",
|
|
||||||
"resolved": "https://registry.npmjs.org/path-key/-/path-key-3.1.1.tgz",
|
|
||||||
"integrity": "sha512-ojmeN0qd+y0jszEtoY48r0Peq5dwMEkIlCOu6Q5f41lfkswXuKtYrhgoTpLnyIcHm24Uhqx+5Tqm2InSwLhE6Q==",
|
|
||||||
"license": "MIT",
|
|
||||||
"engines": {
|
|
||||||
"node": ">=8"
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"node_modules/path-scurry": {
|
|
||||||
"version": "2.0.0",
|
|
||||||
"resolved": "https://registry.npmjs.org/path-scurry/-/path-scurry-2.0.0.tgz",
|
|
||||||
"integrity": "sha512-ypGJsmGtdXUOeM5u93TyeIEfEhM6s+ljAhrk5vAvSx8uyY/02OvrZnA0YNGUrPXfpJMgI1ODd3nwz8Npx4O4cg==",
|
|
||||||
"license": "BlueOak-1.0.0",
|
|
||||||
"dependencies": {
|
|
||||||
"lru-cache": "^11.0.0",
|
|
||||||
"minipass": "^7.1.2"
|
|
||||||
},
|
|
||||||
"engines": {
|
|
||||||
"node": "20 || >=22"
|
|
||||||
},
|
|
||||||
"funding": {
|
|
||||||
"url": "https://github.com/sponsors/isaacs"
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"node_modules/rimraf": {
|
|
||||||
"version": "6.0.1",
|
|
||||||
"resolved": "https://registry.npmjs.org/rimraf/-/rimraf-6.0.1.tgz",
|
|
||||||
"integrity": "sha512-9dkvaxAsk/xNXSJzMgFqqMCuFgt2+KsOFek3TMLfo8NCPfWpBmqwyNn5Y+NX56QUYfCtsyhF3ayiboEoUmJk/A==",
|
|
||||||
"license": "ISC",
|
|
||||||
"dependencies": {
|
|
||||||
"glob": "^11.0.0",
|
|
||||||
"package-json-from-dist": "^1.0.0"
|
|
||||||
},
|
|
||||||
"bin": {
|
|
||||||
"rimraf": "dist/esm/bin.mjs"
|
|
||||||
},
|
|
||||||
"engines": {
|
|
||||||
"node": "20 || >=22"
|
|
||||||
},
|
|
||||||
"funding": {
|
|
||||||
"url": "https://github.com/sponsors/isaacs"
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"node_modules/safer-buffer": {
|
|
||||||
"version": "2.1.2",
|
|
||||||
"resolved": "https://registry.npmjs.org/safer-buffer/-/safer-buffer-2.1.2.tgz",
|
|
||||||
"integrity": "sha512-YZo3K82SD7Riyi0E1EQPojLz7kpepnSQI9IyPbHHg1XXXevb5dJI7tpyN2ADxGcQbHG7vcyRHk0cbwqcQriUtg==",
|
|
||||||
"license": "MIT"
|
|
||||||
},
|
|
||||||
"node_modules/shebang-command": {
|
|
||||||
"version": "2.0.0",
|
|
||||||
"resolved": "https://registry.npmjs.org/shebang-command/-/shebang-command-2.0.0.tgz",
|
|
||||||
"integrity": "sha512-kHxr2zZpYtdmrN1qDjrrX/Z1rR1kG8Dx+gkpK1G4eXmvXswmcE1hTWBWYUzlraYw1/yZp6YuDY77YtvbN0dmDA==",
|
|
||||||
"license": "MIT",
|
|
||||||
"dependencies": {
|
|
||||||
"shebang-regex": "^3.0.0"
|
|
||||||
},
|
|
||||||
"engines": {
|
|
||||||
"node": ">=8"
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"node_modules/shebang-regex": {
|
|
||||||
"version": "3.0.0",
|
|
||||||
"resolved": "https://registry.npmjs.org/shebang-regex/-/shebang-regex-3.0.0.tgz",
|
|
||||||
"integrity": "sha512-7++dFhtcx3353uBaq8DDR4NuxBetBzC7ZQOhmTQInHEd6bSrXdiEyzCvG07Z44UYdLShWUyXt5M/yhz8ekcb1A==",
|
|
||||||
"license": "MIT",
|
|
||||||
"engines": {
|
|
||||||
"node": ">=8"
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"node_modules/showdown": {
|
|
||||||
"version": "2.1.0",
|
|
||||||
"resolved": "https://registry.npmjs.org/showdown/-/showdown-2.1.0.tgz",
|
|
||||||
"integrity": "sha512-/6NVYu4U819R2pUIk79n67SYgJHWCce0a5xTP979WbNp0FL9MN1I1QK662IDU1b6JzKTvmhgI7T7JYIxBi3kMQ==",
|
|
||||||
"license": "MIT",
|
|
||||||
"dependencies": {
|
|
||||||
"commander": "^9.0.0"
|
|
||||||
},
|
|
||||||
"bin": {
|
|
||||||
"showdown": "bin/showdown.js"
|
|
||||||
},
|
|
||||||
"funding": {
|
|
||||||
"type": "individual",
|
|
||||||
"url": "https://www.paypal.me/tiviesantos"
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"node_modules/signal-exit": {
|
|
||||||
"version": "4.1.0",
|
|
||||||
"resolved": "https://registry.npmjs.org/signal-exit/-/signal-exit-4.1.0.tgz",
|
|
||||||
"integrity": "sha512-bzyZ1e88w9O1iNJbKnOlvYTrWPDl46O1bG0D3XInv+9tkPrxrN8jUUTiFlDkkmKWgn1M6CfIA13SuGqOa9Korw==",
|
|
||||||
"license": "ISC",
|
|
||||||
"engines": {
|
|
||||||
"node": ">=14"
|
|
||||||
},
|
|
||||||
"funding": {
|
|
||||||
"url": "https://github.com/sponsors/isaacs"
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"node_modules/string-width": {
|
|
||||||
"version": "5.1.2",
|
|
||||||
"resolved": "https://registry.npmjs.org/string-width/-/string-width-5.1.2.tgz",
|
|
||||||
"integrity": "sha512-HnLOCR3vjcY8beoNLtcjZ5/nxn2afmME6lhrDrebokqMap+XbeW8n9TXpPDOqdGK5qcI3oT0GKTW6wC7EMiVqA==",
|
|
||||||
"license": "MIT",
|
|
||||||
"dependencies": {
|
|
||||||
"eastasianwidth": "^0.2.0",
|
|
||||||
"emoji-regex": "^9.2.2",
|
|
||||||
"strip-ansi": "^7.0.1"
|
|
||||||
},
|
|
||||||
"engines": {
|
|
||||||
"node": ">=12"
|
|
||||||
},
|
|
||||||
"funding": {
|
|
||||||
"url": "https://github.com/sponsors/sindresorhus"
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"node_modules/string-width-cjs": {
|
|
||||||
"name": "string-width",
|
|
||||||
"version": "4.2.3",
|
|
||||||
"resolved": "https://registry.npmjs.org/string-width/-/string-width-4.2.3.tgz",
|
|
||||||
"integrity": "sha512-wKyQRQpjJ0sIp62ErSZdGsjMJWsap5oRNihHhu6G7JVO/9jIB6UyevL+tXuOqrng8j/cxKTWyWUwvSTriiZz/g==",
|
|
||||||
"license": "MIT",
|
|
||||||
"dependencies": {
|
|
||||||
"emoji-regex": "^8.0.0",
|
|
||||||
"is-fullwidth-code-point": "^3.0.0",
|
|
||||||
"strip-ansi": "^6.0.1"
|
|
||||||
},
|
|
||||||
"engines": {
|
|
||||||
"node": ">=8"
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"node_modules/string-width-cjs/node_modules/ansi-regex": {
|
|
||||||
"version": "5.0.1",
|
|
||||||
"resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-5.0.1.tgz",
|
|
||||||
"integrity": "sha512-quJQXlTSUGL2LH9SUXo8VwsY4soanhgo6LNSm84E1LBcE8s3O0wpdiRzyR9z/ZZJMlMWv37qOOb9pdJlMUEKFQ==",
|
|
||||||
"license": "MIT",
|
|
||||||
"engines": {
|
|
||||||
"node": ">=8"
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"node_modules/string-width-cjs/node_modules/emoji-regex": {
|
|
||||||
"version": "8.0.0",
|
|
||||||
"resolved": "https://registry.npmjs.org/emoji-regex/-/emoji-regex-8.0.0.tgz",
|
|
||||||
"integrity": "sha512-MSjYzcWNOA0ewAHpz0MxpYFvwg6yjy1NG3xteoqz644VCo/RPgnr1/GGt+ic3iJTzQ8Eu3TdM14SawnVUmGE6A==",
|
|
||||||
"license": "MIT"
|
|
||||||
},
|
|
||||||
"node_modules/string-width-cjs/node_modules/strip-ansi": {
|
|
||||||
"version": "6.0.1",
|
|
||||||
"resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-6.0.1.tgz",
|
|
||||||
"integrity": "sha512-Y38VPSHcqkFrCpFnQ9vuSXmquuv5oXOKpGeT6aGrr3o3Gc9AlVa6JBfUSOCnbxGGZF+/0ooI7KrPuUSztUdU5A==",
|
|
||||||
"license": "MIT",
|
|
||||||
"dependencies": {
|
|
||||||
"ansi-regex": "^5.0.1"
|
|
||||||
},
|
|
||||||
"engines": {
|
|
||||||
"node": ">=8"
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"node_modules/strip-ansi": {
|
|
||||||
"version": "7.1.2",
|
|
||||||
"resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-7.1.2.tgz",
|
|
||||||
"integrity": "sha512-gmBGslpoQJtgnMAvOVqGZpEz9dyoKTCzy2nfz/n8aIFhN/jCE/rCmcxabB6jOOHV+0WNnylOxaxBQPSvcWklhA==",
|
|
||||||
"license": "MIT",
|
|
||||||
"dependencies": {
|
|
||||||
"ansi-regex": "^6.0.1"
|
|
||||||
},
|
|
||||||
"engines": {
|
|
||||||
"node": ">=12"
|
|
||||||
},
|
|
||||||
"funding": {
|
|
||||||
"url": "https://github.com/chalk/strip-ansi?sponsor=1"
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"node_modules/strip-ansi-cjs": {
|
|
||||||
"name": "strip-ansi",
|
|
||||||
"version": "6.0.1",
|
|
||||||
"resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-6.0.1.tgz",
|
|
||||||
"integrity": "sha512-Y38VPSHcqkFrCpFnQ9vuSXmquuv5oXOKpGeT6aGrr3o3Gc9AlVa6JBfUSOCnbxGGZF+/0ooI7KrPuUSztUdU5A==",
|
|
||||||
"license": "MIT",
|
|
||||||
"dependencies": {
|
|
||||||
"ansi-regex": "^5.0.1"
|
|
||||||
},
|
|
||||||
"engines": {
|
|
||||||
"node": ">=8"
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"node_modules/strip-ansi-cjs/node_modules/ansi-regex": {
|
|
||||||
"version": "5.0.1",
|
|
||||||
"resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-5.0.1.tgz",
|
|
||||||
"integrity": "sha512-quJQXlTSUGL2LH9SUXo8VwsY4soanhgo6LNSm84E1LBcE8s3O0wpdiRzyR9z/ZZJMlMWv37qOOb9pdJlMUEKFQ==",
|
|
||||||
"license": "MIT",
|
|
||||||
"engines": {
|
|
||||||
"node": ">=8"
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"node_modules/undici": {
|
|
||||||
"version": "7.16.0",
|
|
||||||
"resolved": "https://registry.npmjs.org/undici/-/undici-7.16.0.tgz",
|
|
||||||
"integrity": "sha512-QEg3HPMll0o3t2ourKwOeUAZ159Kn9mx5pnzHRQO8+Wixmh88YdZRiIwat0iNzNNXn0yoEtXJqFpyW7eM8BV7g==",
|
|
||||||
"license": "MIT",
|
|
||||||
"engines": {
|
|
||||||
"node": ">=20.18.1"
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"node_modules/universalify": {
|
|
||||||
"version": "2.0.1",
|
|
||||||
"resolved": "https://registry.npmjs.org/universalify/-/universalify-2.0.1.tgz",
|
|
||||||
"integrity": "sha512-gptHNQghINnc/vTGIk0SOFGFNXw7JVrlRUtConJRlvaw6DuX0wO5Jeko9sWrMBhh+PsYAZ7oXAiOnf/UKogyiw==",
|
|
||||||
"license": "MIT",
|
|
||||||
"engines": {
|
|
||||||
"node": ">= 10.0.0"
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"node_modules/web-streams-polyfill": {
|
|
||||||
"version": "3.3.3",
|
|
||||||
"resolved": "https://registry.npmjs.org/web-streams-polyfill/-/web-streams-polyfill-3.3.3.tgz",
|
|
||||||
"integrity": "sha512-d2JWLCivmZYTSIoge9MsgFCZrt571BikcWGYkjC1khllbTeDlGqZ2D8vD8E/lJa8WGWbb7Plm8/XJYV7IJHZZw==",
|
|
||||||
"license": "MIT",
|
|
||||||
"engines": {
|
|
||||||
"node": ">= 8"
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"node_modules/whatwg-encoding": {
|
|
||||||
"version": "3.1.1",
|
|
||||||
"resolved": "https://registry.npmjs.org/whatwg-encoding/-/whatwg-encoding-3.1.1.tgz",
|
|
||||||
"integrity": "sha512-6qN4hJdMwfYBtE3YBTTHhoeuUrDBPZmbQaxWAqSALV/MeEnR5z1xd8UKud2RAkFoPkmB+hli1TZSnyi84xz1vQ==",
|
|
||||||
"license": "MIT",
|
|
||||||
"dependencies": {
|
|
||||||
"iconv-lite": "0.6.3"
|
|
||||||
},
|
|
||||||
"engines": {
|
|
||||||
"node": ">=18"
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"node_modules/whatwg-mimetype": {
|
|
||||||
"version": "4.0.0",
|
|
||||||
"resolved": "https://registry.npmjs.org/whatwg-mimetype/-/whatwg-mimetype-4.0.0.tgz",
|
|
||||||
"integrity": "sha512-QaKxh0eNIi2mE9p2vEdzfagOKHCcj1pJ56EEHGQOVxp8r9/iszLUUV7v89x9O1p/T+NlTM5W7jW6+cz4Fq1YVg==",
|
|
||||||
"license": "MIT",
|
|
||||||
"engines": {
|
|
||||||
"node": ">=18"
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"node_modules/which": {
|
|
||||||
"version": "2.0.2",
|
|
||||||
"resolved": "https://registry.npmjs.org/which/-/which-2.0.2.tgz",
|
|
||||||
"integrity": "sha512-BLI3Tl1TW3Pvl70l3yq3Y64i+awpwXqsGBYWkkqMtnbXgrMD+yj7rhW0kuEDxzJaYXGjEW5ogapKNMEKNMjibA==",
|
|
||||||
"license": "ISC",
|
|
||||||
"dependencies": {
|
|
||||||
"isexe": "^2.0.0"
|
|
||||||
},
|
|
||||||
"bin": {
|
|
||||||
"node-which": "bin/node-which"
|
|
||||||
},
|
|
||||||
"engines": {
|
|
||||||
"node": ">= 8"
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"node_modules/wrap-ansi": {
|
|
||||||
"version": "8.1.0",
|
|
||||||
"resolved": "https://registry.npmjs.org/wrap-ansi/-/wrap-ansi-8.1.0.tgz",
|
|
||||||
"integrity": "sha512-si7QWI6zUMq56bESFvagtmzMdGOtoxfR+Sez11Mobfc7tm+VkUckk9bW2UeffTGVUbOksxmSw0AA2gs8g71NCQ==",
|
|
||||||
"license": "MIT",
|
|
||||||
"dependencies": {
|
|
||||||
"ansi-styles": "^6.1.0",
|
|
||||||
"string-width": "^5.0.1",
|
|
||||||
"strip-ansi": "^7.0.1"
|
|
||||||
},
|
|
||||||
"engines": {
|
|
||||||
"node": ">=12"
|
|
||||||
},
|
|
||||||
"funding": {
|
|
||||||
"url": "https://github.com/chalk/wrap-ansi?sponsor=1"
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"node_modules/wrap-ansi-cjs": {
|
|
||||||
"name": "wrap-ansi",
|
|
||||||
"version": "7.0.0",
|
|
||||||
"resolved": "https://registry.npmjs.org/wrap-ansi/-/wrap-ansi-7.0.0.tgz",
|
|
||||||
"integrity": "sha512-YVGIj2kamLSTxw6NsZjoBxfSwsn0ycdesmc4p+Q21c5zPuZ1pl+NfxVdxPtdHvmNVOQ6XSYG4AUtyt/Fi7D16Q==",
|
|
||||||
"license": "MIT",
|
|
||||||
"dependencies": {
|
|
||||||
"ansi-styles": "^4.0.0",
|
|
||||||
"string-width": "^4.1.0",
|
|
||||||
"strip-ansi": "^6.0.0"
|
|
||||||
},
|
|
||||||
"engines": {
|
|
||||||
"node": ">=10"
|
|
||||||
},
|
|
||||||
"funding": {
|
|
||||||
"url": "https://github.com/chalk/wrap-ansi?sponsor=1"
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"node_modules/wrap-ansi-cjs/node_modules/ansi-regex": {
|
|
||||||
"version": "5.0.1",
|
|
||||||
"resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-5.0.1.tgz",
|
|
||||||
"integrity": "sha512-quJQXlTSUGL2LH9SUXo8VwsY4soanhgo6LNSm84E1LBcE8s3O0wpdiRzyR9z/ZZJMlMWv37qOOb9pdJlMUEKFQ==",
|
|
||||||
"license": "MIT",
|
|
||||||
"engines": {
|
|
||||||
"node": ">=8"
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"node_modules/wrap-ansi-cjs/node_modules/ansi-styles": {
|
|
||||||
"version": "4.3.0",
|
|
||||||
"resolved": "https://registry.npmjs.org/ansi-styles/-/ansi-styles-4.3.0.tgz",
|
|
||||||
"integrity": "sha512-zbB9rCJAT1rbjiVDb2hqKFHNYLxgtk8NURxZ3IZwD3F6NtxbXZQCnnSi1Lkx+IDohdPlFp222wVALIheZJQSEg==",
|
|
||||||
"license": "MIT",
|
|
||||||
"dependencies": {
|
|
||||||
"color-convert": "^2.0.1"
|
|
||||||
},
|
|
||||||
"engines": {
|
|
||||||
"node": ">=8"
|
|
||||||
},
|
|
||||||
"funding": {
|
|
||||||
"url": "https://github.com/chalk/ansi-styles?sponsor=1"
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"node_modules/wrap-ansi-cjs/node_modules/emoji-regex": {
|
|
||||||
"version": "8.0.0",
|
|
||||||
"resolved": "https://registry.npmjs.org/emoji-regex/-/emoji-regex-8.0.0.tgz",
|
|
||||||
"integrity": "sha512-MSjYzcWNOA0ewAHpz0MxpYFvwg6yjy1NG3xteoqz644VCo/RPgnr1/GGt+ic3iJTzQ8Eu3TdM14SawnVUmGE6A==",
|
|
||||||
"license": "MIT"
|
|
||||||
},
|
|
||||||
"node_modules/wrap-ansi-cjs/node_modules/string-width": {
|
|
||||||
"version": "4.2.3",
|
|
||||||
"resolved": "https://registry.npmjs.org/string-width/-/string-width-4.2.3.tgz",
|
|
||||||
"integrity": "sha512-wKyQRQpjJ0sIp62ErSZdGsjMJWsap5oRNihHhu6G7JVO/9jIB6UyevL+tXuOqrng8j/cxKTWyWUwvSTriiZz/g==",
|
|
||||||
"license": "MIT",
|
|
||||||
"dependencies": {
|
|
||||||
"emoji-regex": "^8.0.0",
|
|
||||||
"is-fullwidth-code-point": "^3.0.0",
|
|
||||||
"strip-ansi": "^6.0.1"
|
|
||||||
},
|
|
||||||
"engines": {
|
|
||||||
"node": ">=8"
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"node_modules/wrap-ansi-cjs/node_modules/strip-ansi": {
|
|
||||||
"version": "6.0.1",
|
|
||||||
"resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-6.0.1.tgz",
|
|
||||||
"integrity": "sha512-Y38VPSHcqkFrCpFnQ9vuSXmquuv5oXOKpGeT6aGrr3o3Gc9AlVa6JBfUSOCnbxGGZF+/0ooI7KrPuUSztUdU5A==",
|
|
||||||
"license": "MIT",
|
|
||||||
"dependencies": {
|
|
||||||
"ansi-regex": "^5.0.1"
|
|
||||||
},
|
|
||||||
"engines": {
|
|
||||||
"node": ">=8"
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
30 package.json
@@ -1,30 +0,0 @@
{
  "name": "awesome-docker-website",
  "version": "1.0.0",
  "description": "A curated list of Docker resources and projects Inspired by @sindresorhus and improved by amazing contributors",
  "main": "build.js",
  "scripts": {
    "build": "rimraf ./dist/ && node build.js",
    "test-pr": "node tests/pull_request.mjs",
    "test": "node tests/test_all.mjs",
    "health-check": "node tests/health_check.mjs"
  },
  "repository": {
    "type": "git",
    "url": "git+https://github.com/veggiemonk/awesome-docker.git"
  },
  "author": "Julien Bisconti <julien.bisconti at hotmail dot com>",
  "license": "Apache-2.0",
  "bugs": {
    "url": "https://github.com/veggiemonk/awesome-docker/issues"
  },
  "homepage": "https://github.com/veggiemonk/awesome-docker#readme",
  "dependencies": {
    "cheerio": "1.1.2",
    "draftlog": "1.0.13",
    "fs-extra": "11.3.2",
    "node-fetch": "3.3.2",
    "rimraf": "6.0.1",
    "showdown": "^2.1.0"
  }
}
108 tests/common.mjs
@@ -1,108 +0,0 @@
import fetch, { isRedirect } from 'node-fetch';
import { readFileSync } from 'fs';

const LINKS_OPTIONS = {
    redirect: 'manual',
    headers: {
        'Content-Type': 'application/json',
        'user-agent':
            'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.149 Safari/537.36',
    },
    timeout: 60000, // 1m
    signal: AbortSignal.timeout(60000),
};

const LOG = {
    error: (...args) => console.error('❌ ERROR', args),
    error_string: (...args) =>
        console.error('❌ ERROR', JSON.stringify({ ...args }, null, ' ')),
    debug: (...args) => {
        if (process.env.DEBUG) console.log('>>> DEBUG: ', { ...args });
    },
    debug_string: (...args) => {
        if (process.env.DEBUG)
            console.log('>>> DEBUG: ', JSON.stringify({ ...args }, null, ' '));
    },
};

const handleFailure = (error) => {
    console.error(`${error.message}: ${error.stack}`, { error });
    process.exit(1);
};

process.on('unhandledRejection', handleFailure);

const extract_all_links = (markdown) => {
    // if you have a problem and you try to solve it with a regex,
    // now you have two problems
    // TODO: replace this mess with a markdown parser?
    const re = /(((https:(?:\/\/)?)(?:[-;:&=+$,\w]+@)?[A-Za-z0-9.-]+|(?:www\.|[-;:&=+$,\w]+@)[A-Za-z0-9.-]+)((?:\/[+~%/.\w\-_]*)?\??(?:[-+=&;%@.\w_]*)#?(?:[.!/@\-\\\w]*))?)/g;
    return markdown.match(re);
};

const find_duplicates = (arr) => {
    const hm = {};
    const dup = [];
    arr.forEach((e) => {
        if (hm[e]) dup.push(e);
        else hm[e] = true;
    });
    return dup;
};

const partition = (arr, func) => {
    const ap = [[], []];
    arr.forEach((e) => (func(e) ? ap[0].push(e) : ap[1].push(e)));
    return ap;
};

async function fetch_link(url) {
    try {
        const { headers, ok, status, statusText } = await fetch(url, LINKS_OPTIONS);
        const redirect = isRedirect(status) ? { redirect: { src: url, dst: headers.get('location') } } : {};
        return [url, { ok, status: statusText, ...redirect }];
    } catch (error) {
        return [url, { ok: false, status: error.message }];
    }
}

async function batch_fetch({ arr, get, post_filter_func, BATCH_SIZE = 8 }) {
    const result = [];
    /* eslint-disable no-await-in-loop */
    for (let i = 0; i < arr.length; i += BATCH_SIZE) {
        const batch = arr.slice(i, i + BATCH_SIZE);
        LOG.debug_string({ batch });
        let res = await Promise.all(batch.map(get));
        console.log(`batch fetched...${i + BATCH_SIZE}`);
        res = post_filter_func ? res.filter(post_filter_func) : res;
        LOG.debug_string({ res });
        result.push(...res);
    }
    return result;
}

const data = readFileSync('./tests/exclude_in_test.json');
const exclude = JSON.parse(data);
const exclude_length = exclude.length;
const exclude_from_list = (link) => {
    let is_excluded = false;
    for (let i = 0; i < exclude_length; i += 1) {
        if (link.startsWith(exclude[i])) {
            is_excluded = true;
            break;
        }
    }
    return is_excluded;
};

export default {
    LOG,
    handleFailure,
    extract_all_links,
    find_duplicates,
    partition,
    fetch_link,
    batch_fetch,
    exclude_from_list,
};
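The `find_duplicates` and `partition` helpers in `tests/common.mjs` are pure functions, so they can be sanity-checked without any network access. A minimal standalone sketch (re-implemented here for illustration; the sample `links` array is made up):

```javascript
// Standalone re-implementations of two helpers from tests/common.mjs.

// Collect every element that appears more than once, in encounter order.
const find_duplicates = (arr) => {
  const seen = {};
  const dup = [];
  arr.forEach((e) => {
    if (seen[e]) dup.push(e);
    else seen[e] = true;
  });
  return dup;
};

// Split an array into [matching, nonMatching] in a single pass.
const partition = (arr, func) => {
  const ap = [[], []];
  arr.forEach((e) => (func(e) ? ap[0].push(e) : ap[1].push(e)));
  return ap;
};

// Hypothetical input, mimicking links extracted from README.md.
const links = ['https://a.dev', 'https://b.dev', 'https://a.dev'];
console.log(find_duplicates(links)); // the duplicated link only
console.log(partition(links, (l) => l.startsWith('https://a')));
```

Note that `find_duplicates` reports an element once per extra occurrence, which is exactly what the duplicate-link test needs to build its error message.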
tests/exclude_in_test.json
@@ -1,17 +0,0 @@
[
    "https://vimeo.com",
    "https://travis-ci.org/veggiemonk/awesome-docker.svg",
    "https://github.com/apps/",
    "https://twitter.com",
    "https://www.meetup.com/",
    "https://cycle.io/",
    "https://www.manning.com/",
    "https://deepfence.io",
    "https://cdn.rawgit.com/sindresorhus/awesome/d7305f38d29fed78fa85652e3a63e154dd8e8829/media/badge.svg",
    "https://www.se-radio.net/2017/05/se-radio-episode-290-diogo-monica-on-docker-security",
    "https://www.reddit.com/r/docker/",
    "https://www.udacity.com/course/scalable-microservices-with-kubernetes--ud615",
    "https://www.youtube.com/playlist",
    "https://www.aquasec.com",
    "https://cloudsmith.com"
]
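`exclude_from_list` in `tests/common.mjs` treats each entry of this file as a URL prefix: a link is skipped when it starts with any entry. A minimal sketch of that check, using a two-entry subset of the list and made-up sample links (`.some()` stands in for the original's loop-and-break):

```javascript
// Prefix-based exclusion, as exclude_from_list does against the JSON list above.
const exclude = ['https://vimeo.com', 'https://twitter.com'];

const exclude_from_list = (link) =>
  exclude.some((prefix) => link.startsWith(prefix));

const links = [
  'https://vimeo.com/123', // matches the 'https://vimeo.com' prefix
  'https://github.com/veggiemonk/awesome-docker',
];
const kept = links.filter((l) => !exclude_from_list(l));
// only the GitHub link survives the filter
```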
tests/health_check.mjs
@@ -1,206 +0,0 @@
import fs from 'fs-extra';
|
|
||||||
import fetch from 'node-fetch';
|
|
||||||
import helper from './common.mjs';
|
|
||||||
|
|
||||||
const README = 'README.md';
|
|
||||||
const GITHUB_GQL_API = 'https://api.github.com/graphql';
|
|
||||||
const TOKEN = process.env.GITHUB_TOKEN || '';
|
|
||||||
|
|
||||||
if (!TOKEN) {
|
|
||||||
console.error('GITHUB_TOKEN environment variable is required');
|
|
||||||
process.exit(1);
|
|
||||||
}
|
|
||||||
|
|
||||||
const Authorization = `token ${TOKEN}`;
|
|
||||||
|
|
||||||
const LOG = {
|
|
||||||
info: (...args) => console.log('ℹ️ ', ...args),
|
|
||||||
warn: (...args) => console.warn('⚠️ ', ...args),
|
|
||||||
error: (...args) => console.error('❌', ...args),
|
|
||||||
};
|
|
||||||
|
|
||||||
// Extract GitHub repos from links
|
|
||||||
const extract_repos = (arr) =>
|
|
||||||
arr
|
|
||||||
.map((e) => e.substr('https://github.com/'.length).split('/'))
|
|
||||||
.filter((r) => r.length === 2 && r[1] !== '');
|
|
||||||
|
|
||||||
// Generate GraphQL query to check repo health
|
|
||||||
const generate_health_query = (repos) => {
|
|
||||||
const repoQueries = repos.map(([owner, name]) => {
|
|
||||||
const safeName = `repo_${owner.replace(/(-|\.)/g, '_')}_${name.replace(/(-|\.)/g, '_')}`;
|
|
||||||
return `${safeName}: repository(owner: "${owner}", name:"${name}"){
|
|
||||||
nameWithOwner
|
|
||||||
isArchived
|
|
||||||
pushedAt
|
|
||||||
createdAt
|
|
||||||
stargazerCount
|
|
||||||
forkCount
|
|
||||||
isDisabled
|
|
||||||
isFork
|
|
||||||
isLocked
|
|
||||||
isPrivate
|
|
||||||
}`;
|
|
||||||
}).join('\n');
|
|
||||||
|
|
||||||
return `query REPO_HEALTH { ${repoQueries} }`;
|
|
||||||
};
|
|
||||||
|
|
||||||
// Batch repos into smaller chunks for GraphQL
|
|
||||||
function* batchRepos(repos, size = 50) {
|
|
||||||
for (let i = 0; i < repos.length; i += size) {
|
|
||||||
yield repos.slice(i, i + size);
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
async function checkRepoHealth(repos) {
|
|
||||||
const results = {
|
|
||||||
archived: [],
|
|
||||||
stale: [], // No commits in 2+ years
|
|
||||||
inactive: [], // No commits in 1-2 years
|
|
||||||
healthy: [],
|
|
||||||
disabled: [],
|
|
||||||
total: repos.length,
|
|
||||||
};
|
|
||||||
|
|
||||||
const twoYearsAgo = new Date();
|
|
||||||
twoYearsAgo.setFullYear(twoYearsAgo.getFullYear() - 2);
|
|
||||||
|
|
||||||
const oneYearAgo = new Date();
|
|
||||||
oneYearAgo.setFullYear(oneYearAgo.getFullYear() - 1);
|
|
||||||
|
|
||||||
LOG.info(`Checking health of ${repos.length} repositories...`);
|
|
||||||
|
|
||||||
for (const batch of batchRepos(repos)) {
|
|
||||||
const query = generate_health_query(batch);
|
|
||||||
const options = {
|
|
||||||
method: 'POST',
|
|
||||||
headers: {
|
|
||||||
Authorization,
|
|
||||||
'Content-Type': 'application/json',
|
|
||||||
},
|
|
||||||
body: JSON.stringify({ query }),
|
|
||||||
};
|
|
||||||
|
|
||||||
try {
|
|
||||||
const response = await fetch(GITHUB_GQL_API, options);
|
|
||||||
const data = await response.json();
|
|
||||||
|
|
||||||
if (data.errors) {
|
|
||||||
LOG.error('GraphQL errors:', data.errors);
|
|
||||||
continue;
|
|
||||||
}
|
|
||||||
|
|
||||||
for (const [key, repo] of Object.entries(data.data)) {
|
|
||||||
if (!repo) continue;
|
|
||||||
|
|
||||||
const pushedAt = new Date(repo.pushedAt);
|
|
||||||
const repoInfo = {
|
|
||||||
name: repo.nameWithOwner,
|
|
||||||
pushedAt: repo.pushedAt,
|
|
||||||
stars: repo.stargazerCount,
|
|
||||||
url: `https://github.com/${repo.nameWithOwner}`,
|
|
||||||
};
|
|
||||||
|
|
||||||
if (repo.isArchived) {
|
|
||||||
results.archived.push(repoInfo);
|
|
||||||
} else if (repo.isDisabled) {
|
|
||||||
results.disabled.push(repoInfo);
|
|
||||||
} else if (pushedAt < twoYearsAgo) {
|
|
||||||
results.stale.push(repoInfo);
|
|
||||||
} else if (pushedAt < oneYearAgo) {
|
|
||||||
results.inactive.push(repoInfo);
|
|
||||||
} else {
|
|
||||||
results.healthy.push(repoInfo);
|
|
||||||
}
|
|
||||||
}
|
|
||||||
} catch (error) {
|
|
||||||
LOG.error('Batch fetch error:', error.message);
|
|
||||||
}
|
|
||||||
|
|
||||||
// Rate limiting - wait a bit between batches
|
|
||||||
await new Promise(resolve => setTimeout(resolve, 1000));
|
|
||||||
}
|
|
||||||
|
|
||||||
return results;
|
|
||||||
}
|
|
||||||
|
|
||||||
function generateReport(results) {
    const report = [];

    report.push('# 🏥 Awesome Docker - Health Check Report\n');
    report.push(`**Generated:** ${new Date().toISOString()}\n`);
    report.push(`**Total Repositories:** ${results.total}\n`);

    report.push('\n## 📊 Summary\n');
    report.push(`- ✅ Healthy (updated in last year): ${results.healthy.length}`);
    report.push(`- ⚠️ Inactive (1-2 years): ${results.inactive.length}`);
    report.push(`- 🪦 Stale (2+ years): ${results.stale.length}`);
    report.push(`- 📦 Archived: ${results.archived.length}`);
    report.push(`- 🚫 Disabled: ${results.disabled.length}\n`);

    if (results.archived.length > 0) {
        report.push('\n## 📦 Archived Repositories (Should mark as :skull:)\n');
        results.archived.forEach(repo => {
            report.push(`- [${repo.name}](${repo.url}) - ⭐ ${repo.stars} - Last push: ${repo.pushedAt}`);
        });
    }

    if (results.stale.length > 0) {
        report.push('\n## 🪦 Stale Repositories (No activity in 2+ years)\n');
        results.stale.slice(0, 50).forEach(repo => {
            report.push(`- [${repo.name}](${repo.url}) - ⭐ ${repo.stars} - Last push: ${repo.pushedAt}`);
        });
        if (results.stale.length > 50) {
            report.push(`\n... and ${results.stale.length - 50} more`);
        }
    }

    if (results.inactive.length > 0) {
        report.push('\n## ⚠️ Inactive Repositories (No activity in 1-2 years)\n');
        report.push('_These may still be stable/complete projects - review individually_\n');
        results.inactive.slice(0, 30).forEach(repo => {
            report.push(`- [${repo.name}](${repo.url}) - ⭐ ${repo.stars} - Last push: ${repo.pushedAt}`);
        });
        if (results.inactive.length > 30) {
            report.push(`\n... and ${results.inactive.length - 30} more`);
        }
    }

    return report.join('\n');
}

async function main() {
    const markdown = await fs.readFile(README, 'utf8');
    let links = helper.extract_all_links(markdown);

    const github_links = links.filter(link =>
        link.startsWith('https://github.com') &&
        !helper.exclude_from_list(link) &&
        !link.includes('/issues') &&
        !link.includes('/pull') &&
        !link.includes('/wiki') &&
        !link.includes('#')
    );

    const repos = extract_repos(github_links);
    const results = await checkRepoHealth(repos);

    const report = generateReport(results);

    // Save report
    await fs.writeFile('HEALTH_REPORT.md', report);
    LOG.info('Health report saved to HEALTH_REPORT.md');

    // Also print summary to console
    console.log('\n' + report);

    // Exit with error if there are actionable items
    if (results.archived.length > 0 || results.stale.length > 10) {
        LOG.warn(`Found ${results.archived.length} archived and ${results.stale.length} stale repos`);
        process.exit(1);
    }
}

console.log('Starting health check...');
main();
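The health categories used by the report above (healthy under one year, inactive between one and two, stale past two) can be sketched as a small classifier. This is a hypothetical helper, not taken from the script itself; the thresholds are the ones the report headings state.

```javascript
// Minimal sketch of the age-based classification the health report describes.
// Assumption: `pushedAt` is an ISO-8601 timestamp string (as returned by the
// GitHub API); a "year" is approximated as 365 days.
function categorize(pushedAt, now = new Date()) {
    const ONE_YEAR_MS = 365 * 24 * 60 * 60 * 1000;
    const ageMs = now - new Date(pushedAt);
    if (ageMs > 2 * ONE_YEAR_MS) return 'stale';    // 2+ years without a push
    if (ageMs > ONE_YEAR_MS) return 'inactive';     // 1-2 years
    return 'healthy';                               // pushed within the last year
}
```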
@@ -1,69 +0,0 @@
import fs from 'fs-extra';
import helper from './common.mjs';

console.log({
    DEBUG: process.env.DEBUG || false,
});

const README = 'README.md';

async function main() {
    const has_error = {
        show: false,
        duplicates: '',
        other_links_error: '',
    };
    const markdown = await fs.readFile(README, 'utf8');
    let links = helper.extract_all_links(markdown);
    links = links.filter((l) => !helper.exclude_from_list(l)); // exclude websites
    helper.LOG.debug_string({ links });

    console.log(`total links to check ${links.length}`);

    console.log('checking for duplicates links...');

    const duplicates = helper.find_duplicates(links);
    if (duplicates.length > 0) {
        has_error.show = true;
        has_error.duplicates = duplicates;
    }
    helper.LOG.debug_string({ duplicates });
    const [github_links, external_links] = helper.partition(links, (link) =>
        link.startsWith('https://github.com'),
    );

    console.log(`checking ${external_links.length} external links...`);

    const external_links_error = await helper.batch_fetch({
        arr: external_links,
        get: helper.fetch_link,
        post_filter_func: (x) => !x[1].ok,
        BATCH_SIZE: 8,
    });
    if (external_links_error.length > 0) {
        has_error.show = true;
        has_error.other_links_error = external_links_error;
    }

    console.log(`checking ${github_links.length} GitHub repositories...`);

    console.log(
        `skipping GitHub repository check. Run "npm run test" to execute them manually.`,
    );

    console.log({
        TEST_PASSED: !has_error.show,
        EXTERNAL_LINKS: external_links.length,
    });

    if (has_error.show) {
        helper.LOG.error_string(has_error);
        process.exit(1);
    }
}

console.log('starting...');
main().catch((error) => {
    console.error('Fatal error:', error);
    process.exit(1);
});
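The `helper.batch_fetch({ arr, get, post_filter_func, BATCH_SIZE })` call above fetches links in bounded-concurrency batches and keeps only failures. A minimal sketch of that pattern, under the assumption that the real helper in `common.mjs` works roughly this way (it may differ in details such as retries or logging):

```javascript
// Sketch: process `arr` in chunks of BATCH_SIZE, awaiting each chunk fully
// before starting the next, then keep only entries matching post_filter_func.
async function batch_fetch({ arr, get, post_filter_func, BATCH_SIZE = 8 }) {
    const results = [];
    for (let i = 0; i < arr.length; i += BATCH_SIZE) {
        const chunk = arr.slice(i, i + BATCH_SIZE);
        // All requests in a chunk run concurrently; chunks run sequentially.
        results.push(...(await Promise.all(chunk.map(get))));
    }
    return results.filter(post_filter_func);
}
```

With `get` returning `[link, response]` pairs and `post_filter_func: (x) => !x[1].ok`, the return value is exactly the list of broken links.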
@@ -1,127 +0,0 @@
import fs from 'fs-extra';
import fetch from 'node-fetch';
import helper from './common.mjs';

function envvar_undefined(variable_name) {
    throw new Error(`${variable_name} must be defined`);
}

console.log({
    DEBUG: process.env.DEBUG || false,
});

const README = 'README.md';
const GITHUB_GQL_API = 'https://api.github.com/graphql';
const TOKEN = process.env.GITHUB_TOKEN || envvar_undefined('GITHUB_TOKEN');

const Authorization = `token ${TOKEN}`;

const make_GQL_options = (query) => ({
    method: 'POST',
    headers: {
        Authorization,
        'Content-Type': 'application/json',
        'user-agent':
            'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.149 Safari/537.36',
    },
    body: JSON.stringify({ query }),
});

const extract_repos = (arr) =>
    arr
        .map((e) => e.substr('https://github.com/'.length).split('/'))
        .filter((r) => r.length === 2 && r[1] !== '');

const generate_GQL_query = (arr) =>
    `query AWESOME_REPOS{ ${arr
        .map(
            ([owner, name]) =>
                `repo_${owner.replace(/(-|\.)/g, '_')}_${name.replace(
                    /(-|\.)/g,
                    '_',
                )}: repository(owner: "${owner}", name:"${name}"){ nameWithOwner isArchived } `,
        )
        .join('')} }`;

async function main() {
    const has_error = {
        show: false,
        duplicates: '',
        other_links_error: '',
        github_repos: '',
    };
    const markdown = await fs.readFile(README, 'utf8');
    let links = helper.extract_all_links(markdown);
    links = links.filter((l) => !helper.exclude_from_list(l)); // exclude websites
    helper.LOG.debug_string({ links });

    console.log(`total links to check ${links.length}`);

    console.log('checking for duplicates links...');

    const duplicates = helper.find_duplicates(links);
    if (duplicates.length > 0) {
        has_error.show = true;
        has_error.duplicates = duplicates;
    }
    helper.LOG.debug_string({ duplicates });
    const [github_links, external_links] = helper.partition(links, (link) =>
        link.startsWith('https://github.com'),
    );

    console.log(`checking ${external_links.length} external links...`);

    const external_links_error = await helper.batch_fetch({
        arr: external_links,
        get: helper.fetch_link,
        post_filter_func: (x) => !x[1].ok,
        BATCH_SIZE: 8,
    });
    if (external_links_error.length > 0) {
        has_error.show = true;
        has_error.other_links_error = external_links_error;
    }

    console.log(`checking ${github_links.length} GitHub repositories...`);

    const repos = extract_repos(github_links);
    const query = generate_GQL_query(repos);
    const options = make_GQL_options(query);
    const gql_response = await fetch(GITHUB_GQL_API, options).then((r) =>
        r.json(),
    );
    if (gql_response.errors) {
        has_error.show = true;
        has_error.github_repos = gql_response.errors;
    }

    // Check for archived repositories
    console.log('checking for archived repositories...');
    const archived_repos = [];
    if (gql_response.data) {
        for (const repo of Object.values(gql_response.data)) {
            if (repo && repo.isArchived) {
                archived_repos.push(repo.nameWithOwner);
            }
        }
    }
    if (archived_repos.length > 0) {
        console.warn(`⚠️ Found ${archived_repos.length} archived repositories that should be marked with :skull:`);
        console.warn('Archived repos:', archived_repos);
        // Don't fail the build, just warn
    }

    console.log({
        TEST_PASSED: !has_error.show,
        GITHUB_REPOSITORY: github_links.length,
        EXTERNAL_LINKS: external_links.length,
    });

    if (has_error.show) {
        helper.LOG.error_string(has_error);
        process.exit(1);
    }
}

console.log('starting...');
main();
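`generate_GQL_query` above builds one GraphQL document that queries every repository via aliases, and GraphQL alias names may not contain `-` or `.`, hence the `replace(/(-|\.)/g, '_')` calls. The alias scheme in isolation:

```javascript
// Sketch of the alias naming used by generate_GQL_query: each [owner, name]
// pair becomes a GraphQL-safe alias by mapping "-" and "." to "_".
const alias = ([owner, name]) =>
    `repo_${owner.replace(/(-|\.)/g, '_')}_${name.replace(/(-|\.)/g, '_')}`;
```

Note that distinct repos can collide under this scheme (e.g. `a-b/c` and `a/b-c` both become `repo_a_b_c`), in which case GitHub reports a duplicate-alias error that surfaces through `gql_response.errors`.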
@@ -1,229 +0,0 @@
<!DOCTYPE html>
<html class="no-js" lang="en">
    <head>
        <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
        <meta http-equiv="Cache-control" content="public" />
        <meta charset="UTF-8" />
        <title>Awesome-docker</title>
        <meta name="viewport" content="width=device-width, initial-scale=1" />
        <meta name="theme-color" content="#5DBCD2" />
        <meta
            name="description"
            content="A curated list of Docker resources and projects."
        />
        <meta
            name="keywords"
            content="free and open-source open source projects for docker moby kubernetes linux awesome awesome-list container tools dockerfile list moby docker-container docker-image docker-environment docker-deployment docker-swarm docker-api docker-monitoring docker-machine docker-security docker-registry"
        />
        <meta
            name="google-site-verification"
            content="_yiugvz0gCtfsBLyLl1LnkALXb6D4ofiwCyV1XOlYBM"
        />
        <link rel="icon" type="image/png" href="favicon.png" />
        <style>
            * {
                box-sizing: border-box;
            }

            html {
                font-family: sans-serif;
                -ms-text-size-adjust: 100%;
                -webkit-text-size-adjust: 100%;
            }

            body {
                padding: 0;
                margin: 0;
                font-family: Open Sans, Helvetica Neue, Helvetica, Arial, sans-serif;
                font-size: 16px;
                line-height: 1.5;
                color: #606c71;
            }

            section {
                display: block;
            }

            a {
                background-color: transparent;
                color: #5dbcd2;
                text-decoration: none;
            }

            strong {
                font-weight: 700;
            }

            h1 {
                font-size: 2em;
                margin: 0.67em 0;
            }

            img {
                border: 0;
            }

            svg:not(:root) {
                overflow: hidden;
            }

            .btn {
                display: inline-block;
                margin-bottom: 1rem;
                color: hsla(0, 0%, 100%, 0.7);
                background-color: hsla(0, 0%, 100%, 0.08);
                border: 1px solid hsla(0, 0%, 100%, 0.2);
                border-radius: 0.3rem;
            }

            .page-header {
                color: #fff;
                text-align: center;
                background-color: #5dbcd2;
                background-image: linear-gradient(120deg, #155799, #5dbcd2);
            }

            .project-name {
                margin-top: 0;
                margin-bottom: 0.1rem;
            }

            .project-tagline {
                margin-bottom: 2rem;
                font-weight: 400;
                opacity: 0.7;
            }

            .main-content {
                word-wrap: break-word;
            }

            .main-content :first-child {
                margin-top: 0;
            }

            .main-content h1,
            .main-content h4 {
                margin-top: 2rem;
                margin-bottom: 1rem;
                font-weight: 400;
                color: #5dbcd2;
            }

            .main-content p {
                margin-bottom: 1em;
            }

            .main-content blockquote {
                padding: 0 1rem;
                margin-left: 0;
                color: #819198;
                border-left: 0.3rem solid #dce6f0;
            }

            .main-content blockquote > :first-child {
                margin-top: 0;
            }

            .main-content blockquote > :last-child {
                margin-bottom: 0;
            }

            .main-content img {
                max-width: 100%;
            }

            @media screen and (min-width: 64em) {
                .btn {
                    padding: 0.75rem 1rem;
                }
                .page-header {
                    padding: 5rem 6rem;
                }
                .project-name {
                    font-size: 3.25rem;
                }
                .project-tagline {
                    font-size: 1.25rem;
                }
                .main-content {
                    max-width: 64rem;
                    padding: 2rem 6rem;
                    margin: 0 auto;
                    font-size: 1.1rem;
                }
            }

            @media screen and (min-width: 42em) and (max-width: 64em) {
                .btn {
                    padding: 0.6rem 0.9rem;
                    font-size: 0.9rem;
                }
                .page-header {
                    padding: 3rem 4rem;
                }
                .project-name {
                    font-size: 2.25rem;
                }
                .project-tagline {
                    font-size: 1.15rem;
                }
                .main-content {
                    padding: 2rem 4rem;
                    font-size: 1.1rem;
                }
            }

            @media screen and (max-width: 42em) {
                .btn {
                    display: block;
                    width: 100%;
                    padding: 0.75rem;
                    font-size: 0.9rem;
                }
                .page-header {
                    padding: 2rem 1rem;
                }
                .project-name {
                    font-size: 1.75rem;
                }
                .project-tagline {
                    font-size: 1rem;
                }
                .main-content {
                    padding: 2rem 1rem;
                    font-size: 1rem;
                }
            }
        </style>
    </head>

    <body>
        <section class="page-header">
            <h1 class="project-name">Awesome-docker</h1>
            <h2 class="project-tagline">
                A curated list of Docker resources and projects
            </h2>
            <a href="https://github.com/veggiemonk/awesome-docker" class="btn"
                >View on GitHub</a
            >
            <br />
            <!-- Place this tag where you want the button to render. -->
            <a
                class="github-button"
                href="https://github.com/veggiemonk/awesome-docker#readme"
                data-icon="octicon-star"
                data-size="large"
                data-count-href="/veggiemonk/awesome-docker/stargazers"
                data-show-count="true"
                data-count-aria-label="# stargazers on GitHub"
                aria-label="Star veggiemonk/awesome-docker on GitHub"
                >Star</a
            >
        </section>
        <section id="md" class="main-content"></section>
        <!--<script src="index.js"></script> -->
        <!--Place this tag in your head or just before your close body tag. -->
        <script async defer src="https://buttons.github.io/buttons.js"></script>
    </body>
</html>