Compare commits

...

114 Commits

Author SHA1 Message Date
Thomas Hobson af5036d82c
Convert some packages to nix-based
Affected packages:
- crystal
- dart
- dash
- deno
- elixir
- erlang
- gawk
2022-01-31 18:02:35 +13:00
Thomas Hobson ddab59ccdd
add limit overrides to runtime definitions 2022-01-31 17:47:27 +13:00
Thomas Hobson 8d6ecfaf37
fix scaffold script file naming 2022-01-31 17:46:48 +13:00
Thomas Hobson 388b781aca
Update scaffold tooling to give test command 2022-01-31 17:18:12 +13:00
Thomas Hobson e3cdbc160c
Merge branch 'nix-packages' of github.com:engineer-man/piston into nix-packages 2022-01-31 17:07:36 +13:00
Thomas Hobson 83e4a1a136
Fix issues after merging upstream
Implements a simple container builder for runtime sets
2022-01-31 17:01:28 +13:00
Thomas Hobson e022e34a37
Add nix runtime testing and pre-installing runtimes 2022-01-31 14:43:54 +13:00
Thomas Hobson 564da5a7eb
BREAKING: replace custom build scripts with nix
General:
- Switched to yarn to better work with nix-based tooling
- Switched the package system to use nix. This eliminates duplicate dependencies and slow cloud compile times, while delegating more compile/runtime support to the Nix project
- Removed container builder in favor of internal container tooling
- Package versions no longer need to be SemVer compliant
- Removed "piston package spec" files, replaced with nix-flake based runtimes
- Exported nosocket and piston-api as packages within the nix-flake
- Removed repo container
- Switched docker building to nix-based container outputting
- Removed docker compose as this is a single container
- Removed package commands from CLI

Packages:
- Moved bash, clojure, cobol, node, python2, python3 to the new format
- Remainder of packages still need to be moved

v2 API:
- Removed "version" specifier. To select specific versions, use the v3 api
- Removed "/package" endpoints as this doesn't work with the new nix-based system

v3 API:
- Duplicate of the v2 API, except a runtime ID is passed in instead of a language name (a sketch follows this entry).
2022-01-31 14:42:12 +13:00
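A rough sketch of the request-shape difference (the v3 field name is an assumption; the commit only states that an ID replaces the language name):

```js
// Sketch of the v2 vs v3 request shape described above.
// The v3 "runtime" field name is hypothetical, not taken from this diff.
const files = [{ name: 'main.py', content: "print('OK')" }];

// v2: select a runtime by language name; the "version" specifier is removed
const v2_request = { language: 'python', files };

// v3: select a specific runtime by its ID (hypothetical field name)
const v3_request = { runtime: 'python-3.10.0', files };
```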
Thomas Hobson e06b59d82c
dockerfile to manage mounting of /nix 2022-01-30 22:32:12 +13:00
Thomas Hobson e2f37b7493
install API in container 2022-01-30 22:32:12 +13:00
Thomas Hobson 8d6ae04733
create packages flake 2022-01-30 22:31:43 +13:00
Thomas Hobson 20e71f617b
Remove hacktoberfest notice 2021-12-25 13:39:12 +13:00
Thomas Hobson 104b80df5c
Merge pull request #407 from Hydrazer/brachylog
pkg(brachylog-1.0.0)
2021-12-07 16:27:44 +13:00
Thomas Hobson d2ca0ca18e
Merge branch 'master' into brachylog 2021-12-07 16:20:33 +13:00
Thomas Hobson eb37d0ab72
ci: don 2021-11-29 14:00:10 +13:00
Hydrazer 8d32385b41 pkg(brachylog-1.0.0): added brachylog 2021-11-28 07:03:57 -07:00
Thomas Hobson e95d386697
Merge pull request #406 from Hydrazer/master
pkg(racket-8.3.0): added racket 8.3.0
2021-11-28 17:22:16 +13:00
Hydrazer 132a835b0b pkg(racket-8.3.0): added racket 8.3.0 2021-11-27 20:00:11 -07:00
Thomas Hobson c385c73bc4
Merge pull request #403 from Hydrazer/master
pkg(retina-1.2.0): added retina 1.2.0
2021-11-28 14:09:52 +13:00
Thomas Hobson ef9b22f154
Merge branch 'master' into master 2021-11-28 14:01:18 +13:00
Thomas Hobson 79f1ca50a5
Merge pull request #405 from Jonxslays/task/piston_rs
Add piston_rs to readme
2021-11-28 13:59:49 +13:00
Jonxslays d76aa7527e
Add piston_rs to readme 2021-11-27 12:26:28 -07:00
Hydrazer 6cee1e8c34 pkg(retina-1.2.0): added retina 1.2.0 2021-11-27 03:59:56 -07:00
Thomas Hobson 507233400d
Merge pull request #399 from engineer-man/dependabot/pip/docs/mkdocs-1.2.3
build(deps): bump mkdocs from 1.1.2 to 1.2.3 in /docs
2021-11-26 04:26:04 +13:00
dependabot[bot] 0085bd7217
build(deps): bump mkdocs from 1.1.2 to 1.2.3 in /docs
Bumps [mkdocs](https://github.com/mkdocs/mkdocs) from 1.1.2 to 1.2.3.
- [Release notes](https://github.com/mkdocs/mkdocs/releases)
- [Commits](https://github.com/mkdocs/mkdocs/compare/1.1.2...1.2.3)

---
updated-dependencies:
- dependency-name: mkdocs
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2021-11-25 14:56:13 +00:00
Thomas Hobson de4d12caa7
Merge pull request #398 from Hydrazer/master
pkg(befunge93-0.2.0): added befunge93
2021-11-26 03:55:44 +13:00
Hydrazer 0949610b61 pkg(befunge93-0.2.0): added befunge93 2021-11-25 07:42:00 -07:00
Thomas Hobson 142e7912fa
api(config): respect $PORT as used by heroku if no explicit bind address provided 2021-11-26 02:32:28 +13:00
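A minimal sketch of the behaviour, mirroring the `bind_address` default changed in the config diff further down this page:

```js
// Fall back to Heroku-style $PORT when no explicit bind address is configured,
// and to port 2000 when $PORT is unset.
const default_bind_address = `0.0.0.0:${process.env['PORT'] || 2000}`;
console.log(default_bind_address); // "0.0.0.0:2000" unless $PORT is set
```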
Thomas Hobson 3a2dc74586
Merge pull request #395 from VlaDexa/updated-deno
pkg(deno-1.16.2): Added deno 1.16.2
2021-11-25 19:00:24 +13:00
VlaDexa 81c6694637 pkg(deno-1.16.2): Set the cache directory 2021-11-21 23:27:59 +03:00
VlaDexa ed14e125e3 pkg(deno-1.16.2): Fix the test file 2021-11-21 23:18:39 +03:00
VlaDexa aeddcc7661 pkg(deno-1.16.2): Added deno 1.16.2 2021-11-21 21:39:40 +03:00
Thomas Hobson 1eddc8dcf7
Merge pull request #393 from VlaDexa/updated-rust
pkg(rust-1.56.1): Added rust 1.56.1
2021-11-19 13:44:56 +13:00
VlaDexa c2d9367b99 pkg(rust-1.56.1): Added rust 1.56.1 2021-11-18 17:36:10 +03:00
Thomas Hobson c091c117c7
api(job): Decrease safe_call CPU time
By increasing the niceness value of child processes, the scheduler gives them less CPU time.
This allows the node process to get more CPU time, so it can work through the event queue faster.
2021-11-11 21:34:30 +13:00
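A minimal sketch of the change, matching the `safe_call` diff further down this page (the job invocation itself is hypothetical):

```js
// Prefix the sandboxed command with `nice` so children run at lower priority,
// leaving the node process enough CPU time to drain its event queue.
const cp = require('child_process');

const inner = ['bash', 'run.sh', 'arg1']; // hypothetical job invocation
const proc_call = ['nice', ...inner];
const proc = cp.spawn(proc_call[0], proc_call.slice(1), { stdio: 'pipe' });
proc.on('exit', code => console.log(`job exited with code ${code}`));
```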
Thomas Hobson c7efa5372a
api(job): Switch process cleanup to sync
The system used to use async.
This meant execution could be handed off to other processes.
If a fork bomb was used, it could circumvent the janitor: by consuming more CPU time it prevented the process janitor from reading in the process information.
2021-11-11 19:27:54 +13:00
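A sketch of the synchronous flow; the names mirror the `exit_cleanup` diff further down this page:

```js
// With no `await` between steps, control never returns to the event loop,
// so a fork bomb cannot spawn fresh children while cleanup is in progress.
function exit_cleanup(proc, kill_timeout, cleanup_processes) {
    clearTimeout(kill_timeout); // cancel the pending SIGKILL timer
    proc.stderr.destroy(); // stop collecting further output
    proc.stdout.destroy();
    cleanup_processes(); // synchronous process janitor
}
```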
Thomas Hobson d8b430654b
Merge pull request #382 from Brikaa/file
pkg(file-0.0.1): Run executable
2021-10-21 21:19:40 +13:00
Thomas Hobson 78b075089f
Merge branch 'master' into file 2021-10-21 21:13:06 +13:00
Thomas Hobson b0d8de7fc3
Merge pull request #384 from ccall48/master
add missing libs back into py v3.10
2021-10-17 15:25:10 +13:00
Cory cd2b471eed add missing libs back into py v3.10 2021-10-17 11:46:17 +11:00
Brikaa 52fb900603 Don't read the file twice 2021-10-16 11:55:22 +02:00
Thomas Hobson f9aac54735
Merge pull request #377 from dvargas46/master
fix pkg build action for when pkg files are deleted
2021-10-16 22:50:12 +13:00
Brikaa 19252467cb Use some() instead of filter() in v2.js 2021-10-16 11:50:07 +02:00
Brikaa 821d5496e4 Check for base64 in CLI 2021-10-16 11:50:07 +02:00
Brikaa 64833a0408 pkg(file-0.0.1): Run executable 2021-10-16 11:49:59 +02:00
Thomas Hobson 85cba0d89d
Merge pull request #383 from Brikaa/let-const
Adhere to project let, const practices in my code
2021-10-16 22:48:18 +13:00
Brikaa 6ca9606f81 Adhere to project let, const practices in my code 2021-10-16 11:01:17 +02:00
Thomas Hobson 4483cdbe3e
Merge pull request #381 from engineer-man/binary
require at least 1 utf-8 encoded file
2021-10-16 00:40:41 +13:00
Thomas Hobson 7d218f11f4
Merge pull request #373 from engineer-man/binary
Support for uploading files in base64/hex format
2021-10-15 16:15:02 +13:00
Thomas Hobson 7e5844bcb1
require at least 1 utf-8 encoded file 2021-10-15 16:14:06 +13:00
Thomas Hobson 901f301f9b
Merge pull request #378 from Brikaa/pre-commit
Fix Piston script after rebase
2021-10-14 23:47:35 +13:00
Brikaa dfd6beb6ed Fix piston script after rebase 2021-10-14 12:44:51 +02:00
Thomas Hobson 5822a209ed
Merge pull request #376 from Pyroseza/python-3.10
remove unneeded compile file from python 3.10.0 package
2021-10-14 13:45:12 +13:00
Pyroseza 280456ad49
Merge branch 'engineer-man:master' into python-3.10 2021-10-13 22:31:02 +01:00
Pyroseza 8403e0f512 added extra alias for python 3.10 2021-10-13 22:30:38 +01:00
Dan Vargas 0f440c082b fix pkg build action for when pkg files are deleted 2021-10-13 16:24:02 -05:00
Pyroseza 5641f671d1 remove unneeded compile file from python 3.10.0 package 2021-10-13 20:33:29 +01:00
Thomas Hobson b654a77ced
Merge pull request #375 from Pyroseza/python-3.10
added python 3.10
2021-10-14 05:18:54 +13:00
Pyroseza fecfed48fd added python 3.10 2021-10-13 16:56:12 +01:00
Thomas Hobson 0faea205db
Only compile/run files in utf8 encoding 2021-10-14 01:36:29 +13:00
Thomas Hobson 24a352699d
Support for uploading files in base64/hex format 2021-10-14 00:46:49 +13:00
Thomas Hobson 8a89af7512
Merge pull request #366 from Brikaa/pre-commit
Improve pre-commit
2021-10-13 23:17:52 +13:00
Thomas Hobson ddb3703a0d
Merge branch 'master' into pre-commit 2021-10-13 23:17:44 +13:00
Thomas Hobson 94a5568121
Merge pull request #371 from Brikaa/timeout
-1 for no timeout (infinite timeout)
2021-10-13 23:15:29 +13:00
Thomas Hobson e97b6d426d
Merge pull request #367 from Brikaa/constraints-bug
Fix important constraints bug
2021-10-13 06:43:26 +13:00
Thomas Hobson 72c11467e3
Merge pull request #370 from Brikaa/zig
Increase zig compile timeout
2021-10-13 06:41:54 +13:00
Thomas Hobson 6b2a5a9d33
Merge pull request #368 from Brikaa/env-val-bug
Fix env_val bug
2021-10-13 06:40:53 +13:00
Thomas Hobson d1b87e8017
Merge pull request #372 from dvargas46/master
improve piston shell script
2021-10-13 06:40:03 +13:00
Dan Vargas 198d8ff061 improve piston shell script
- fix portability & using piston within a symlink
- only install cli npm modules on update or first use
- allow building packages with custom builder
- fix all shellchecks except SC2164
2021-10-12 12:21:20 -05:00
Brikaa 80eefaa6fb Increase zig compile timeout 2021-10-11 13:26:43 +02:00
Brikaa afd71cc82d Fix env_val bug 2021-10-10 22:08:55 +02:00
Brikaa 2f114d6e54 Fix important constraints bug 2021-10-10 19:20:17 +02:00
Brikaa 5968090f50 Improve precommit 2021-10-10 18:48:34 +02:00
Brikaa f973ecf281 Add infinite timeout 2021-10-10 17:18:31 +02:00
Thomas Hobson 90945d1621
Merge pull request #365 from Brikaa/mono
Revert Mono error separation to work with websockets
2021-10-11 02:17:16 +13:00
Brikaa 7fc0b9efb8 Revert Mono error separation to work with websockets 2021-10-10 14:40:58 +02:00
Thomas Hobson 8cb54cfe58
Merge pull request #355 from Brikaa/lint
Add ./piston lint
2021-10-11 00:19:58 +13:00
Thomas Hobson 40d70ac086
Merge pull request #363 from Hydrazer/master
pkg(vyxal-2.4.1): add vyxal
2021-10-11 00:18:50 +13:00
Hydrazer 6416b4d8cb pkg(vyxal-2.4.1): add vyxal 2021-10-09 19:18:07 -06:00
Brikaa f8eb7053ed Fix typescript 2021-10-09 18:16:41 +02:00
Brikaa f2c91acbe6 Piston lint 2021-10-09 18:10:58 +02:00
Thomas Hobson d61fb8ec5b
Merge pull request #356 from Brikaa/remove-container
Remove stopped package builder container
2021-10-09 22:51:49 +13:00
Thomas Hobson 0e7775f5d6
Merge pull request #359 from Hydrazer/master
pkg(husk-1.0.0): add husk
2021-10-09 15:12:25 +13:00
Hydrazer 7ff87cf0f2 pkg(husk-1.0.0): add husk 2021-10-08 17:14:08 -06:00
Brikaa cfbb62f5bf Remove stopped package builder container 2021-10-08 17:20:10 +02:00
Thomas Hobson 2ae63a4d69
Merge pull request #354 from Niki4tap/master
Added a package to support LLVM IR
2021-10-07 02:49:26 +13:00
Niki4tap fbee9e6c22 pkg(llvm_ir-12.0.1): Add the package to readme and `ll` alias 2021-10-06 13:32:54 +00:00
Niki4tap 478ccefa58 pkg(llvm_ir-12.0.1): Fixed test.ll 2021-10-06 13:00:58 +00:00
Niki4tap 70cf0b1a90 Merged master branch 2021-10-06 12:57:16 +00:00
Niki4tap 82d6dfd3e9 pkg(llvm_ir-12.0.1): Added llvm_ir 12.0.1 2021-10-06 12:52:59 +00:00
Thomas Hobson 69ea3ec7a0
Merge pull request #353 from Brikaa/docs
Document limit overrides, timeouts, provides and local testing
2021-10-07 00:44:01 +13:00
Thomas Hobson c6ccf642f6
Merge pull request #347 from dvargas46/master
prevent building empty package on pr
2021-10-06 17:22:37 +13:00
Thomas Hobson fec08cbce5
Merge pull request #349 from milindmadhukar/master
Added Go-Piston to the Readme
2021-10-06 17:21:59 +13:00
Brikaa 363daa2a24 Document limit_overrides, timeouts, provides and local testing 2021-10-04 21:35:13 +02:00
Thomas Hobson eb6d00c9d7
Merge pull request #352 from Brikaa/dotnet
pkg(dotnet-5.0.201): Added F#.net, F# interactive and VB.net
2021-10-05 05:11:42 +13:00
Brikaa aa4b94a237 Add csharp.net, fsharp.net, basic.net to readme 2021-10-04 17:39:08 +02:00
Brikaa d3cbb64bd7 Merge branch 'master' into dotnet 2021-10-04 17:36:59 +02:00
Brikaa adae6fde2f pkg(dotnet-5.0.201): Added F#.net, F# interactive and VB.net 2021-10-04 17:25:11 +02:00
Brikaa a5c3858100 Add per-language constraint overrides 2021-10-04 17:11:46 +02:00
Brikaa c0f203537c config.js: timeout, overrides 2021-10-04 17:11:46 +02:00
Hydrazer 1c48ca2ab5 pkg(japt-2.0.0): add japt 2021-10-04 17:11:46 +02:00
Dan Vargas 4f5ef06adf pkg(freebasic-1.8.0): Add Freebasic 2021-10-04 17:11:46 +02:00
Brikaa 883d584c15 pkg(iverilog-11.0.0): Added iverilog 11.0.0 2021-10-04 17:11:46 +02:00
Dan Vargas e5732ef459 pkg(forte-1.0.0): add forte 2021-10-04 17:11:46 +02:00
Brikaa b36ce650dd Add ./piston logs 2021-10-04 17:11:46 +02:00
Brikaa 241b56a5e9 Add semantic versioning in CONTRIBUTING.MD 2021-10-04 17:11:46 +02:00
Thomas Hobson 36d72383a5 rework process janitor
Old process janitor required starting a `ps` process.
This was problematic, as `ps` requires another entry in the process table, which in some cases was impossible because the table was already exhausted.
2021-10-04 17:11:46 +02:00
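A minimal sketch of the reworked approach (the helper name is illustrative; the parsing follows the `cleanup_processes` diff further down this page):

```js
// Enumerate candidate processes by reading /proc directly, so the janitor
// never needs a spare process-table entry to spawn `ps`.
const fss = require('fs');
const path = require('path');

function processes_owned_by(uid) {
    return fss
        .readdirSync('/proc')
        .filter(entry => !isNaN(entry)) // numeric entries are PIDs
        .filter(pid => {
            try {
                const status = fss
                    .readFileSync(path.join('/proc', pid, 'status'))
                    .toString();
                const uid_line = status
                    .split('\n')
                    .find(line => line.startsWith('Uid:'));
                const [, ruid, euid] = uid_line.split(/\s+/);
                return ruid == String(uid) || euid == String(uid);
            } catch {
                return false; // the process exited while we were reading it
            }
        })
        .map(pid => parseInt(pid, 10));
}
```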
Thomas Hobson 5d392effcc api: maximum concurrent jobs and potential fix for gcc 2021-10-04 17:11:46 +02:00
Thomas Hobson c24937aeb7 Add self to license 2021-10-04 17:11:46 +02:00
Thomas Hobson 6f3ad0f9ed
Merge pull request #348 from Brikaa/overrides
Add per-language limit overrides
2021-10-05 04:03:08 +13:00
Brikaa 94af5639bf Add per-language constraint overrides 2021-10-04 16:56:08 +02:00
Vargas, Dan 78d0ce156f prevent building empty package on pr 2021-10-02 19:51:07 -06:00
milindmadhukar 46ecd482bb
Added Go-Piston to the Readme 2021-10-02 23:38:11 +05:30
Brikaa 4870441574 config.js: timeout, overrides 2021-10-02 11:05:11 +02:00
145 changed files with 1921 additions and 1027 deletions

View File

@ -4,7 +4,6 @@ about: Template for requesting language support
title: Add [insert language name here]
labels: package
assignees: ''
---
Provide links to different compilers/interpreters that could be used to implement this language, and discuss pros/cons of each.

View File

@ -1,10 +1,11 @@
Checklist:
* [ ] The package builds locally with `./piston build-pkg [package] [version]`
* [ ] The package installs with `./piston ppman install [package]=[version]`
* [ ] The package runs the test code with `./piston run [package] -l [version] packages/[package]/[version]/test.*`
* [ ] Package files are placed in the correct directory
* [ ] No old package versions are removed
* [ ] All source files are deleted in the `build.sh` script
* [ ] `metadata.json`'s `language` and `version` fields match the directory path
* [ ] Any extensions the language may use are set as aliases
* [ ] Any alternative names the language is referred to are set as aliases.
- [ ] The package builds locally with `./piston build-pkg [package] [version]`
- [ ] The package installs with `./piston ppman install [package]=[version]`
- [ ] The package runs the test code with `./piston run [package] -l [version] packages/[package]/[version]/test.*`
- [ ] Package files are placed in the correct directory
- [ ] No old package versions are removed
- [ ] All source files are deleted in the `build.sh` script
- [ ] `metadata.json`'s `language` and `version` fields match the directory path
- [ ] Any extensions the language may use are set as aliases
- [ ] Any alternative names the language is referred to are set as aliases.

View File

@ -7,7 +7,6 @@ on:
paths:
- api/**
jobs:
push_to_registry:
runs-on: ubuntu-latest

View File

@ -1,14 +1,13 @@
name: "Package Pull Requests"
name: 'Package Pull Requests'
on:
pull_request:
types:
- opened
- edited
- reopened
- synchronize
paths:
- "packages/**"
- 'packages/**'
jobs:
check-pkg:
@ -53,7 +52,7 @@ jobs:
- name: Build Packages
run: |
PACKAGES=$(jq '.[]' -r ${HOME}/files.json | awk -F/ '{ print $2 "-" $3 }' | sort -u)
PACKAGES=$(jq '.[]' -r ${HOME}/files*.json | awk -F/ '$1~/packages/ && $2 && $3{ print $2 "-" $3 }' | sort -u)
echo "Packages: $PACKAGES"
docker pull docker.pkg.github.com/engineer-man/piston/repo-builder:latest
docker build -t repo-builder repo

View File

@ -8,7 +8,6 @@ on:
paths:
- packages/**
jobs:
build-pkg:
name: Build package
@ -31,7 +30,7 @@ jobs:
- name: Build Packages
run: |
PACKAGES=$(jq '.[]' -r ${HOME}/files.json | awk -F/ '{ print $2 "-" $3 }' | sort -u)
PACKAGES=$(jq '.[]' -r ${HOME}/files*.json | awk -F/ '$1~/packages/ && $2 && $3{ print $2 "-" $3 }' | sort -u)
echo "Packages: $PACKAGES"
docker pull docker.pkg.github.com/engineer-man/piston/repo-builder:latest
docker build -t repo-builder repo
@ -51,9 +50,9 @@ jobs:
runs-on: ubuntu-latest
needs: build-pkg
steps:
- name: "Download all release assets"
- name: 'Download all release assets'
run: curl -s https://api.github.com/repos/engineer-man/piston/releases/latest | jq '.assets[].browser_download_url' -r | xargs -L 1 curl -sLO
- name: "Generate index file"
- name: 'Generate index file'
run: |
echo "" > index
BASEURL=https://github.com/engineer-man/piston/releases/download/pkgs/

1
.gitignore vendored
View File

@ -1,3 +1,4 @@
data/
.piston_env
node_modules
result

12
.prettierignore Normal file
View File

@ -0,0 +1,12 @@
node_modules
data/
api/_piston
repo/build
packages/*/*/*
packages/*.pkg.tar.gz
!packages/*/*/metadata.json
!packages/*/*/build.sh
!packages/*/*/environment
!packages/*/*/run
!packages/*/*/compile
!packages/*/*/test.*

12
Dockerfile.withset Normal file
View File

@ -0,0 +1,12 @@
# This "FROM" image is previously emitted by nix
FROM ghcr.io/engineer-man/piston:base-latest
ENV PISTON_FLAKE_PATH=/piston/packages
COPY runtimes/ /piston/packages/runtimes
COPY flake.nix flake.lock /piston/packages/
ARG RUNTIMESET=all
ENV PISTON_RUNTIME_SET=$RUNTIMESET
RUN piston-install

1
api/.gitignore vendored
View File

@ -1,2 +1 @@
node_modules
_piston

View File

@ -1 +0,0 @@
node_modules

View File

@ -51,6 +51,8 @@ with pkgs; rec {
do
echo "nixbld$i:x:$(( $i + 30000 )):30000:Nix build user $i:/var/empty:/run/current-system/sw/bin/nologin" >> etc/passwd
done
chmod 1777 {,var/}tmp/
'';
config = {
@ -61,6 +63,21 @@ with pkgs; rec {
"SSL_CERT_FILE=/etc/ssl/certs/ca-bundle.crt"
"GIT_SSL_CAINFO=/etc/ssl/certs/ca-bundle.crt"
"NIX_SSL_CERT_FILE=/etc/ssl/certs/ca-bundle.crt"
"PATH=${lib.concatStringsSep ":" [
"/usr/local/sbin"
"/usr/local/bin"
"/usr/sbin"
"/usr/bin"
"/sbin"
"/bin"
"/root/.nix-profile/bin"
"/nix/var/nix/profiles/default/bin"
"/nix/var/nix/profiles/default/sbin"
]}"
"MANPATH=${lib.concatStringsSep ":" [
"/root/.nix-profile/share/man"
"/nix/var/nix/profiles/default/share/man"
]}"
];
ExposedPorts = {

View File

@ -3,16 +3,54 @@ const router = express.Router();
const events = require('events');
const config = require('../config');
const runtime = require('../runtime');
const { Job } = require('../job');
const logger = require('logplease').create('api/v3');
const SIGNALS = ["SIGABRT","SIGALRM","SIGBUS","SIGCHLD","SIGCLD","SIGCONT","SIGEMT","SIGFPE","SIGHUP","SIGILL","SIGINFO","SIGINT","SIGIO","SIGIOT","SIGKILL","SIGLOST","SIGPIPE","SIGPOLL","SIGPROF","SIGPWR","SIGQUIT","SIGSEGV","SIGSTKFLT","SIGSTOP","SIGTSTP","SIGSYS","SIGTERM","SIGTRAP","SIGTTIN","SIGTTOU","SIGUNUSED","SIGURG","SIGUSR1","SIGUSR2","SIGVTALRM","SIGXCPU","SIGXFSZ","SIGWINCH"]
const SIGNALS = [
'SIGABRT',
'SIGALRM',
'SIGBUS',
'SIGCHLD',
'SIGCLD',
'SIGCONT',
'SIGEMT',
'SIGFPE',
'SIGHUP',
'SIGILL',
'SIGINFO',
'SIGINT',
'SIGIO',
'SIGIOT',
'SIGKILL',
'SIGLOST',
'SIGPIPE',
'SIGPOLL',
'SIGPROF',
'SIGPWR',
'SIGQUIT',
'SIGSEGV',
'SIGSTKFLT',
'SIGSTOP',
'SIGTSTP',
'SIGSYS',
'SIGTERM',
'SIGTRAP',
'SIGTTIN',
'SIGTTOU',
'SIGUNUSED',
'SIGURG',
'SIGUSR1',
'SIGUSR2',
'SIGVTALRM',
'SIGXCPU',
'SIGXFSZ',
'SIGWINCH',
];
// ref: https://man7.org/linux/man-pages/man7/signal.7.html
function get_job(body) {
const {
let {
language,
args,
stdin,
@ -20,7 +58,7 @@ function get_job(body){
compile_memory_limit,
run_memory_limit,
run_timeout,
compile_timeout
compile_timeout,
} = body;
return new Promise((resolve, reject) => {
@ -35,7 +73,6 @@ function get_job(body){
message: 'files is required as an array',
});
}
for (const [i, file] of files.entries()) {
if (typeof file.content !== 'string') {
return reject({
@ -94,23 +131,65 @@ function get_job(body){
});
}
resolve(new Job({
if (
rt.language !== 'file' &&
!files.some(file => !file.encoding || file.encoding === 'utf8')
) {
return reject({
message: 'files must include at least one utf8 encoded file',
});
}
for (const constraint of ['memory_limit', 'timeout']) {
for (const type of ['compile', 'run']) {
const constraint_name = `${type}_${constraint}`;
const constraint_value = body[constraint_name];
const configured_limit = rt[`${constraint}s`][type];
if (!constraint_value) {
continue;
}
if (typeof constraint_value !== 'number') {
return reject({
message: `If specified, ${constraint_name} must be a number`,
});
}
if (configured_limit <= 0) {
continue;
}
if (constraint_value > configured_limit) {
return reject({
message: `${constraint_name} cannot exceed the configured limit of ${configured_limit}`,
});
}
if (constraint_value < 0) {
return reject({
message: `${constraint_name} must be non-negative`,
});
}
}
}
compile_timeout = compile_timeout || rt.timeouts.compile;
run_timeout = run_timeout || rt.timeouts.run;
compile_memory_limit = compile_memory_limit || rt.memory_limits.compile;
run_memory_limit = run_memory_limit || rt.memory_limits.run;
resolve(
new Job({
runtime: rt,
alias: language,
args: args || [],
stdin: stdin || "",
stdin: stdin || '',
files,
timeouts: {
run: run_timeout || 3000,
compile: compile_timeout || 10000,
run: run_timeout,
compile: compile_timeout,
},
memory_limits: {
run: run_memory_limit || config.run_memory_limit,
compile: compile_memory_limit || config.compile_memory_limit,
}
}));
run: run_memory_limit,
compile: compile_memory_limit,
},
})
);
});
}
router.use((req, res, next) => {
@ -128,87 +207,103 @@ router.use((req, res, next) => {
});
router.ws('/connect', async (ws, req) => {
let job = null;
let eventBus = new events.EventEmitter();
eventBus.on("stdout", (data) => ws.send(JSON.stringify({type: "data", stream: "stdout", data: data.toString()})))
eventBus.on("stderr", (data) => ws.send(JSON.stringify({type: "data", stream: "stderr", data: data.toString()})))
eventBus.on("stage", (stage)=> ws.send(JSON.stringify({type: "stage", stage})))
eventBus.on("exit", (stage, status) => ws.send(JSON.stringify({type: "exit", stage, ...status})))
ws.on("message", async (data) => {
eventBus.on('stdout', data =>
ws.send(
JSON.stringify({
type: 'data',
stream: 'stdout',
data: data.toString(),
})
)
);
eventBus.on('stderr', data =>
ws.send(
JSON.stringify({
type: 'data',
stream: 'stderr',
data: data.toString(),
})
)
);
eventBus.on('stage', stage =>
ws.send(JSON.stringify({ type: 'stage', stage }))
);
eventBus.on('exit', (stage, status) =>
ws.send(JSON.stringify({ type: 'exit', stage, ...status }))
);
ws.on('message', async data => {
try {
const msg = JSON.parse(data);
switch (msg.type) {
case "init":
case 'init':
if (job === null) {
job = await get_job(msg);
await job.prime();
ws.send(JSON.stringify({
type: "runtime",
ws.send(
JSON.stringify({
type: 'runtime',
language: job.runtime.language,
version: job.runtime.version.raw
}))
version: job.runtime.version.raw,
})
);
await job.execute_interactive(eventBus);
ws.close(4999, "Job Completed");
ws.close(4999, 'Job Completed');
} else {
ws.close(4000, "Already Initialized");
ws.close(4000, 'Already Initialized');
}
break;
case "data":
case 'data':
if (job !== null) {
if(msg.stream === "stdin"){
eventBus.emit("stdin", msg.data)
if (msg.stream === 'stdin') {
eventBus.emit('stdin', msg.data);
} else {
ws.close(4004, "Can only write to stdin")
ws.close(4004, 'Can only write to stdin');
}
} else {
ws.close(4003, "Not yet initialized")
ws.close(4003, 'Not yet initialized');
}
break;
case "signal":
case 'signal':
if (job !== null) {
if (SIGNALS.includes(msg.signal)) {
eventBus.emit("signal", msg.signal)
eventBus.emit('signal', msg.signal);
} else {
ws.close(4005, "Invalid signal")
ws.close(4005, 'Invalid signal');
}
} else {
ws.close(4003, "Not yet initialized")
ws.close(4003, 'Not yet initialized');
}
break;
}
} catch (error) {
ws.send(JSON.stringify({type: "error", message: error.message}))
ws.close(4002, "Notified Error")
ws.send(JSON.stringify({ type: 'error', message: error.message }));
ws.close(4002, 'Notified Error');
// ws.close message is limited to 123 characters, so we notify over WS then close.
}
})
});
ws.on("close", async ()=>{
ws.on('close', async () => {
if (job !== null) {
await job.cleanup()
await job.cleanup();
}
})
});
setTimeout(() => {
//Terminate the socket after 1 second, if not initialized.
if(job === null)
ws.close(4001, "Initialization Timeout");
}, 1000)
})
if (job === null) ws.close(4001, 'Initialization Timeout');
}, 1000);
});
router.post('/execute', async (req, res) => {
try {
const job = await get_job(req.body);

View File

@ -16,8 +16,6 @@ const logger = Logger.create('pistond');
const app = express();
expressWs(app);
(async () => {
logger.info('Setting loglevel to', config.log_level);
Logger.setLogLevel(config.log_level);

View File

@ -5,108 +5,105 @@ const config = require('../config');
const Logger = require('logplease');
const logger = Logger.create('test');
const cp = require('child_process');
const runtime = require("../runtime");
const runtime = require('../runtime');
const { Job } = require('../job');
(async function () {
logger.info('Setting loglevel to', config.log_level);
Logger.setLogLevel(config.log_level);
let runtimes_to_test;
let failed = false;
if(process.argv[2] === "--all"){
if (process.argv[2] === '--all') {
// load all
runtimes_to_test = JSON.parse(
cp.execSync(`nix eval ${config.flake_path}#pistonRuntimes --json --apply builtins.attrNames`)
cp.execSync(
`nix eval ${config.flake_path}#pistonRuntimes --json --apply builtins.attrNames`
)
);
} else {
runtimes_to_test = [process.argv[2]];
}
for (const runtime_name of runtimes_to_test) {
const runtime_path = `${config.flake_path}#pistonRuntimes.${runtime_name}`;
logger.info(`Testing runtime ${runtime_path}`);
logger.debug(`Loading runtime metadata`);
const metadata = JSON.parse(cp.execSync(`nix eval --json ${runtime_path}.metadata --json`));
const metadata = JSON.parse(
cp.execSync(`nix eval --json ${runtime_path}.metadata --json`)
);
logger.debug(`Loading runtime tests`);
const tests = JSON.parse(cp.execSync(`nix eval --json ${runtime_path}.tests --json`));
const tests = JSON.parse(
cp.execSync(`nix eval --json ${runtime_path}.tests --json`)
);
logger.debug(`Loading runtime`);
const testable_runtime = new runtime.Runtime({
...metadata,
flake_path: runtime_path
...runtime.Runtime.compute_all_limits(
metadata.language,
metadata.limitOverrides
),
flake_path: runtime_path,
});
testable_runtime.ensure_built();
logger.info(`Running tests`);
for (const test of tests) {
const files = [];
for (const file_name of Object.keys(test.files)) {
const file_content = test.files[file_name];
const this_file = {
name: file_name,
content: file_content
content: file_content,
};
if(file_name == test.main)
files.unshift(this_file);
else
files.push(this_file);
if (file_name == test.main) files.unshift(this_file);
else files.push(this_file);
}
const job = new Job({
runtime: testable_runtime,
args: test.args || [],
stdin: test.stdin || "",
stdin: test.stdin || '',
files,
timeouts: {
run: 3000,
compile: 10000
compile: 10000,
},
memory_limits: {
run: config.run_memory_limit,
compile: config.compile_memory_limit
}
compile: config.compile_memory_limit,
},
});
await job.prime()
const result = await job.execute()
await job.cleanup()
await job.prime();
const result = await job.execute();
await job.cleanup();
if(result.run.stdout.trim() !== "OK"){
if (result.run.stdout.trim() !== 'OK') {
failed = true;
logger.error("Test Failed:")
console.log(job, result)
logger.error('Test Failed:');
console.log(job, result);
} else {
logger.info("Test Passed")
logger.info('Test Passed');
}
}
}
if (failed) {
logger.error("One or more tests failed")
logger.error('One or more tests failed');
process.exit(1);
}
else {
logger.info("All tests passed")
} else {
logger.info('All tests passed');
process.exit(0);
}
})()
})();

View File

@ -2,6 +2,57 @@ const fss = require('fs');
const Logger = require('logplease');
const logger = Logger.create('config');
function parse_overrides(overrides) {
try {
return JSON.parse(overrides);
} catch (e) {
return null;
}
}
function validate_overrides(overrides, options) {
for (const language in overrides) {
for (const key in overrides[language]) {
if (
![
'max_process_count',
'max_open_files',
'max_file_size',
'compile_memory_limit',
'run_memory_limit',
'compile_timeout',
'run_timeout',
'output_max_size',
].includes(key)
) {
logger.error(`Invalid overridden option: ${key}`);
return false;
}
const option = options.find(o => o.key === key);
const parser = option.parser;
const raw = overrides[language][key];
const value = parser(raw);
const validators = option.validators;
for (const validator of validators) {
const response = validator(value, raw);
if (response !== true) {
logger.error(
`Failed to validate overridden option: ${key}`,
response
);
return false;
}
}
overrides[language][key] = value;
}
// Modifies the reference
options[
options.index_of(options.find(o => o.key === 'limit_overrides'))
] = overrides;
}
return true;
}
const options = [
{
key: 'log_level',
@ -17,7 +68,7 @@ const options = [
{
key: 'bind_address',
desc: 'Address to bind REST API on',
default: '0.0.0.0:2000',
default: `0.0.0.0:${process.env["PORT"] || 2000}`,
validators: [],
},
{
@ -91,18 +142,30 @@ const options = [
parser: parse_int,
validators: [(x, raw) => !is_nan(x) || `${raw} is not a number`],
},
{
key: 'compile_timeout',
desc: 'Max time allowed for compile stage in milliseconds',
default: 10000, // 10 seconds
parser: parse_int,
validators: [(x, raw) => !is_nan(x) || `${raw} is not a number`],
},
{
key: 'run_timeout',
desc: 'Max time allowed for run stage in milliseconds',
default: 3000, // 3 seconds
parser: parse_int,
validators: [(x, raw) => !is_nan(x) || `${raw} is not a number`],
},
{
key: 'compile_memory_limit',
desc:
'Max memory usage for compile stage in bytes (set to -1 for no limit)',
desc: 'Max memory usage for compile stage in bytes (set to -1 for no limit)',
default: -1, // no limit
parser: parse_int,
validators: [(x, raw) => !is_nan(x) || `${raw} is not a number`],
},
{
key: 'run_memory_limit',
desc:
'Max memory usage for run stage in bytes (set to -1 for no limit)',
desc: 'Max memory usage for run stage in bytes (set to -1 for no limit)',
default: -1, // no limit
parser: parse_int,
validators: [(x, raw) => !is_nan(x) || `${raw} is not a number`],
@ -124,8 +187,22 @@ const options = [
desc: 'Maximum number of concurrent jobs to run at one time',
default: 64,
parser: parse_int,
validators: [(x) => x > 0 || `${x} cannot be negative`]
}
validators: [x => x > 0 || `${x} cannot be negative`],
},
{
key: 'limit_overrides',
desc: 'Per-language exceptions in JSON format for each of:\
max_process_count, max_open_files, max_file_size, compile_memory_limit,\
run_memory_limit, compile_timeout, run_timeout, output_max_size',
default: {},
parser: parse_overrides,
validators: [
x => !!x || `Invalid JSON format for the overrides\n${x}`,
(overrides, _, options) =>
validate_overrides(overrides, options) ||
`Failed to validate the overrides`,
],
},
];
logger.info(`Loading Configuration from environment`);
@ -143,12 +220,12 @@ options.forEach(option => {
const parsed_val = parser(env_val);
const value = env_val || option.default;
const value = env_val === undefined ? option.default : parsed_val;
option.validators.for_each(validator => {
let response = null;
if (env_val) response = validator(parsed_val, env_val);
else response = validator(value, value);
if (env_val) response = validator(parsed_val, env_val, options);
else response = validator(value, value, options);
if (response !== true) {
errored = true;

View File

@ -1,10 +1,12 @@
const logger = require('logplease').create('job');
const logplease = require('logplease');
const logger = logplease.create('job');
const { v4: uuidv4 } = require('uuid');
const cp = require('child_process');
const path = require('path');
const config = require('./config');
const globals = require('./globals');
const fs = require('fs/promises');
const fss = require('fs');
const wait_pid = require('waitpid');
const job_states = {
@ -16,30 +18,34 @@ const job_states = {
let uid = 0;
let gid = 0;
let remainingJobSpaces = config.max_concurrent_jobs;
let remaining_job_spaces = config.max_concurrent_jobs;
let jobQueue = [];
setInterval(() => {
// Every 10ms try resolve a new job, if there is an available slot
if(jobQueue.length > 0 && remainingJobSpaces > 0){
jobQueue.shift()()
if (jobQueue.length > 0 && remaining_job_spaces > 0) {
jobQueue.shift()();
}
}, 10)
}, 10);
class Job {
constructor({ runtime, files, args, stdin, timeouts, memory_limits }) {
this.uuid = uuidv4();
this.logger = logplease.create(`job/${this.uuid}`);
this.runtime = runtime;
this.files = files.map((file, i) => ({
name: file.name || `file${i}.code`,
content: file.content,
encoding: ['base64', 'hex', 'utf8'].includes(file.encoding)
? file.encoding
: 'utf8',
}));
this.args = args;
this.stdin = stdin;
this.timeouts = timeouts;
this.memory_limits = memory_limits;
@ -52,6 +58,8 @@ class Job {
uid %= config.runner_uid_max - config.runner_uid_min + 1;
gid %= config.runner_gid_max - config.runner_gid_min + 1;
this.logger.debug(`Assigned uid=${this.uid} gid=${this.gid}`);
this.state = job_states.READY;
this.dir = path.join(
config.data_directory,
@ -61,39 +69,45 @@ class Job {
}
async prime() {
if(remainingJobSpaces < 1){
logger.info(`Awaiting job slot uuid=${this.uuid}`)
await new Promise((resolve)=>{
jobQueue.push(resolve)
})
if (remaining_job_spaces < 1) {
this.logger.info(`Awaiting job slot`);
await new Promise(resolve => {
jobQueue.push(resolve);
});
}
logger.info(`Priming job uuid=${this.uuid}`);
remainingJobSpaces--;
logger.debug('Writing files to job cache');
this.logger.info(`Priming job`);
remaining_job_spaces--;
this.logger.debug('Writing files to job cache');
logger.debug(`Transferring ownership uid=${this.uid} gid=${this.gid}`);
this.logger.debug(`Transferring ownership`);
await fs.mkdir(this.dir, { mode: 0o700 });
await fs.chown(this.dir, this.uid, this.gid);
for (const file of this.files) {
let file_path = path.join(this.dir, file.name);
const file_path = path.join(this.dir, file.name);
const rel = path.relative(this.dir, file_path);
const file_content = Buffer.from(file.content, file.encoding);
if(rel.startsWith(".."))
throw Error(`File path "${file.name}" tries to escape parent directory: ${rel}`)
if (rel.startsWith('..'))
throw Error(
`File path "${file.name}" tries to escape parent directory: ${rel}`
);
await fs.mkdir(path.dirname(file_path), {recursive: true, mode: 0o700})
await fs.mkdir(path.dirname(file_path), {
recursive: true,
mode: 0o700,
});
await fs.chown(path.dirname(file_path), this.uid, this.gid);
await fs.write_file(file_path, file.content);
await fs.write_file(file_path, file_content);
await fs.chown(file_path, this.uid, this.gid);
}
this.state = job_states.PRIMED;
logger.debug('Primed job');
this.logger.debug('Primed job');
}
async safe_call(file, args, timeout, memory_limit, eventBus = null) {
@ -102,26 +116,29 @@ class Job {
const prlimit = [
'prlimit',
'--nproc=' + config.max_process_count,
'--nofile=' + config.max_open_files,
'--fsize=' + config.max_file_size,
'--nproc=' + this.runtime.max_process_count,
'--nofile=' + this.runtime.max_open_files,
'--fsize=' + this.runtime.max_file_size,
];
if (memory_limit >= 0) {
prlimit.push('--as=' + memory_limit);
}
const proc_call = [...prlimit, ...nonetwork, 'bash', file, ...args];
const proc_call = [
'nice',
...prlimit,
...nonetwork,
'bash',
file,
...args,
];
var stdout = '';
var stderr = '';
var output = '';
const proc = cp.spawn(proc_call[0], proc_call.splice(1), {
env: {
...this.runtime.env_vars,
PISTON_LANGUAGE: this.runtime.language,
},
stdio: 'pipe',
cwd: this.dir,
uid: this.uid,
@ -134,31 +151,29 @@ class Job {
proc.stdin.end();
proc.stdin.destroy();
} else {
eventBus.on("stdin", (data) => {
eventBus.on('stdin', data => {
proc.stdin.write(data);
})
});
eventBus.on("kill", (signal) => {
proc.kill(signal)
})
eventBus.on('kill', signal => {
proc.kill(signal);
});
}
const kill_timeout = set_timeout(
async _ => {
logger.info(`Timeout exceeded timeout=${timeout} uuid=${this.uuid}`)
process.kill(proc.pid, 'SIGKILL')
},
timeout
);
const kill_timeout =
(timeout >= 0 &&
set_timeout(async _ => {
this.logger.info(`Timeout exceeded timeout=${timeout}`);
process.kill(proc.pid, 'SIGKILL');
}, timeout)) ||
null;
proc.stderr.on('data', async data => {
if (eventBus !== null) {
eventBus.emit("stderr", data);
} else if (stderr.length > config.output_max_size) {
logger.info(`stderr length exceeded uuid=${this.uuid}`)
process.kill(proc.pid, 'SIGKILL')
eventBus.emit('stderr', data);
} else if (stderr.length > this.runtime.output_max_size) {
this.logger.info(`stderr length exceeded`);
process.kill(proc.pid, 'SIGKILL');
} else {
stderr += data;
output += data;
@ -167,34 +182,34 @@ class Job {
proc.stdout.on('data', async data => {
if (eventBus !== null) {
eventBus.emit("stdout", data);
} else if (stdout.length > config.output_max_size) {
logger.info(`stdout length exceeded uuid=${this.uuid}`)
process.kill(proc.pid, 'SIGKILL')
eventBus.emit('stdout', data);
} else if (stdout.length > this.runtime.output_max_size) {
this.logger.info(`stdout length exceeded`);
process.kill(proc.pid, 'SIGKILL');
} else {
stdout += data;
output += data;
}
});
const exit_cleanup = async () => {
const exit_cleanup = () => {
clear_timeout(kill_timeout);
proc.stderr.destroy();
proc.stdout.destroy();
await this.cleanup_processes()
logger.debug(`Finished exit cleanup uuid=${this.uuid}`)
this.cleanup_processes();
this.logger.debug(`Finished exit cleanup`);
};
proc.on('exit', async (code, signal) => {
await exit_cleanup();
proc.on('exit', (code, signal) => {
exit_cleanup();
resolve({ stdout, stderr, code, signal, output });
});
proc.on('error', async err => {
await exit_cleanup();
proc.on('error', err => {
exit_cleanup();
reject({ error: err, stdout, stderr, output });
});
@ -209,13 +224,13 @@ class Job {
);
}
logger.info(
`Executing job uuid=${this.uuid} uid=${this.uid} gid=${
this.gid
} runtime=${this.runtime.toString()}`
);
this.logger.info(`Executing job runtime=${this.runtime.toString()}`);
logger.debug('Compiling');
const code_files =
(this.runtime.language === 'file' && this.files) ||
this.files.filter(file => file.encoding == 'utf8');
this.logger.debug('Compiling');
let compile;
@ -228,11 +243,11 @@ class Job {
);
}
logger.debug('Running');
this.logger.debug('Running');
const run = await this.safe_call(
this.runtime.run,
[this.files[0].name, ...this.args],
[code_files[0].name, ...this.args],
this.timeouts.run,
this.memory_limits.run
);
@ -255,84 +270,98 @@ class Job {
);
}
logger.info(
`Interactively executing job uuid=${this.uuid} uid=${this.uid} gid=${
this.gid
} runtime=${this.runtime.toString()}`
this.logger.info(
`Interactively executing job runtime=${this.runtime.toString()}`
);
const code_files =
(this.runtime.language === 'file' && this.files) ||
this.files.filter(file => file.encoding == 'utf8');
if (this.runtime.compiled) {
eventBus.emit("stage", "compile")
eventBus.emit('stage', 'compile');
const { error, code, signal } = await this.safe_call(
this.runtime.compile,
this.files.map(x => x.name),
path.join(this.runtime.pkgdir, 'compile'),
code_files.map(x => x.name),
this.timeouts.compile,
this.memory_limits.compile,
eventBus
)
);
eventBus.emit("exit", "compile", {error, code, signal})
eventBus.emit('exit', 'compile', { error, code, signal });
}
logger.debug('Running');
eventBus.emit("stage", "run")
this.logger.debug('Running');
eventBus.emit('stage', 'run');
const { error, code, signal } = await this.safe_call(
this.runtime.run,
[this.files[0].name, ...this.args],
path.join(this.runtime.pkgdir, 'run'),
[code_files[0].name, ...this.args],
this.timeouts.run,
this.memory_limits.run,
eventBus
);
eventBus.emit("exit", "run", {error, code, signal})
eventBus.emit('exit', 'run', { error, code, signal });
this.state = job_states.EXECUTED;
}
async cleanup_processes(dont_wait = []) {
cleanup_processes(dont_wait = []) {
let processes = [1];
logger.debug(`Cleaning up processes uuid=${this.uuid}`)
const to_wait = [];
this.logger.debug(`Cleaning up processes`);
while (processes.length > 0) {
processes = []
processes = [];
const proc_ids = fss.readdir_sync('/proc');
const proc_ids = await fs.readdir("/proc");
processes = await Promise.all(proc_ids.map(async (proc_id) => {
processes = proc_ids.map(proc_id => {
if (isNaN(proc_id)) return -1;
try {
const proc_status = await fs.read_file(path.join("/proc",proc_id,"status"));
const proc_lines = proc_status.to_string().split("\n")
const uid_line = proc_lines.find(line=>line.starts_with("Uid:"))
const proc_status = fss.read_file_sync(
path.join('/proc', proc_id, 'status')
);
const proc_lines = proc_status.to_string().split('\n');
const state_line = proc_lines.find(line =>
line.starts_with('State:')
);
const uid_line = proc_lines.find(line =>
line.starts_with('Uid:')
);
const [_, ruid, euid, suid, fuid] = uid_line.split(/\s+/);
const [_1, state, user_friendly] = state_line.split(/\s+/);
if (state == 'Z')
// Zombie process, just needs to be waited on
return -1;
// We should kill in all other states (Sleep, Stopped & Running)
if (ruid == this.uid || euid == this.uid)
return parse_int(proc_id)
return parse_int(proc_id);
} catch {
return -1
return -1;
}
return -1
}))
return -1;
});
processes = processes.filter(p => p > 0)
processes = processes.filter(p => p > 0);
if (processes.length > 0)
logger.debug(`Got processes to kill: ${processes} uuid=${this.uuid}`)
this.logger.debug(`Got processes to kill: ${processes}`);
for (const proc of processes) {
// First stop the processes, but keep their resources allocated so they can't re-fork
try {
process.kill(proc, 'SIGSTOP');
} catch {
} catch (e) {
// Could already be dead
this.logger.debug(
`Got error while SIGSTOPping process ${proc}:`,
e
);
}
}
@ -342,14 +371,27 @@ class Job {
process.kill(proc, 'SIGKILL');
} catch (e) {
// Could already be dead and just needs to be waited on
this.logger.debug(
`Got error while SIGKILLing process ${proc}:`,
e
);
}
if(!dont_wait.includes(proc))
to_wait.push(proc);
}
}
this.logger.debug(
`Finished kill-loop, calling wait_pid to end any zombie processes`
);
for (const proc of to_wait) {
if (dont_wait.includes(proc)) continue;
wait_pid(proc);
}
}
logger.debug(`Cleaned up processes uuid=${this.uuid}`)
this.logger.debug(`Cleaned up processes`);
}
async cleanup_filesystem() {
@ -370,7 +412,7 @@ class Job {
}
} catch (e) {
// File was somehow deleted in the time that we read the dir to when we checked the file
logger.warn(`Error removing file ${file_path}: ${e}`);
this.logger.warn(`Error removing file ${file_path}: ${e}`);
}
}
}
@ -379,15 +421,15 @@ class Job {
}
async cleanup() {
logger.info(`Cleaning up job uuid=${this.uuid}`);
this.logger.info(`Cleaning up job`);
this.cleanup_processes(); // Run process janitor, just in case there are any residual processes somehow
await this.cleanup_filesystem();
remainingJobSpaces++;
remaining_job_spaces++;
}
}
module.exports = {
Job,
};

View File

@ -7,11 +7,33 @@ const path = require('path');
const runtimes = [];
class Runtime {
constructor({ language, version, aliases, runtime, run, compile, packageSupport, flake_path }) {
constructor({
language,
version,
aliases,
runtime,
run,
compile,
packageSupport,
flake_path,
timeouts,
memory_limits,
max_process_count,
max_open_files,
max_file_size,
output_max_size,
}) {
this.language = language;
this.runtime = runtime;
this.timeouts = timeouts;
this.memory_limits = memory_limits;
this.max_process_count = max_process_count;
this.max_open_files = max_open_files;
this.max_file_size = max_file_size;
this.output_max_size = output_max_size;
this.aliases = aliases;
this.version = version;
@ -22,6 +44,69 @@ class Runtime {
this.package_support = packageSupport;
}
static compute_single_limit(
language_name,
limit_name,
language_limit_overrides
) {
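// Precedence: PISTON_LIMIT_OVERRIDES config, then the runtime's own limitOverrides, then the global default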
return (
(config.limit_overrides[language_name] &&
config.limit_overrides[language_name][limit_name]) ||
(language_limit_overrides &&
language_limit_overrides[limit_name]) ||
config[limit_name]
);
}
static compute_all_limits(language_name, language_limit_overrides) {
return {
timeouts: {
compile: this.compute_single_limit(
language_name,
'compile_timeout',
language_limit_overrides
),
run: this.compute_single_limit(
language_name,
'run_timeout',
language_limit_overrides
),
},
memory_limits: {
compile: this.compute_single_limit(
language_name,
'compile_memory_limit',
language_limit_overrides
),
run: this.compute_single_limit(
language_name,
'run_memory_limit',
language_limit_overrides
),
},
max_process_count: this.compute_single_limit(
language_name,
'max_process_count',
language_limit_overrides
),
max_open_files: this.compute_single_limit(
language_name,
'max_open_files',
language_limit_overrides
),
max_file_size: this.compute_single_limit(
language_name,
'max_file_size',
language_limit_overrides
),
output_max_size: this.compute_single_limit(
language_name,
'output_max_size',
language_limit_overrides
),
};
}
ensure_built() {
logger.info(`Ensuring ${this} is built`);
@ -29,34 +114,35 @@ class Runtime {
function _ensure_built(key) {
const command = `nix build ${flake_path}.metadata.${key} --no-link`;
cp.execSync(command, {stdio: "pipe"})
cp.execSync(command, { stdio: 'pipe' });
}
_ensure_built("run");
if(this.compiled) _ensure_built("compile");
logger.debug(`Finished ensuring ${this} is installed`)
_ensure_built('run');
if (this.compiled) _ensure_built('compile');
logger.debug(`Finished ensuring ${this} is installed`);
}
static load_runtime(flake_key) {
logger.info(`Loading ${flake_key}`)
logger.info(`Loading ${flake_key}`);
const flake_path = `${config.flake_path}#pistonRuntimeSets.${config.runtime_set}.${flake_key}`;
const metadata_command = `nix eval --json ${flake_path}.metadata`;
const metadata = JSON.parse(cp.execSync(metadata_command));
const this_runtime = new Runtime({
...metadata,
flake_path
...Runtime.compute_all_limits(
metadata.language,
metadata.limitOverrides
),
flake_path,
});
this_runtime.ensure_built();
runtimes.push(this_runtime);
logger.debug(`Package ${flake_key} was loaded`);
}
get compiled() {
@ -70,10 +156,8 @@ class Runtime {
toString() {
return `${this.language}-${this.version}`;
}
}
module.exports = runtimes;
module.exports.Runtime = Runtime;
module.exports.load_runtime = Runtime.load_runtime;

1
cli/.gitignore vendored
View File

@ -1 +0,0 @@
node_modules

View File

@ -3,8 +3,44 @@ const path = require('path');
const chalk = require('chalk');
const WebSocket = require('ws');
const SIGNALS = ["SIGABRT","SIGALRM","SIGBUS","SIGCHLD","SIGCLD","SIGCONT","SIGEMT","SIGFPE","SIGHUP","SIGILL","SIGINFO","SIGINT","SIGIO","SIGIOT","SIGLOST","SIGPIPE","SIGPOLL","SIGPROF","SIGPWR","SIGQUIT","SIGSEGV","SIGSTKFLT","SIGTSTP","SIGSYS","SIGTERM","SIGTRAP","SIGTTIN","SIGTTOU","SIGUNUSED","SIGURG","SIGUSR1","SIGUSR2","SIGVTALRM","SIGXCPU","SIGXFSZ","SIGWINCH"]
const SIGNALS = [
'SIGABRT',
'SIGALRM',
'SIGBUS',
'SIGCHLD',
'SIGCLD',
'SIGCONT',
'SIGEMT',
'SIGFPE',
'SIGHUP',
'SIGILL',
'SIGINFO',
'SIGINT',
'SIGIO',
'SIGIOT',
'SIGLOST',
'SIGPIPE',
'SIGPOLL',
'SIGPROF',
'SIGPWR',
'SIGQUIT',
'SIGSEGV',
'SIGSTKFLT',
'SIGTSTP',
'SIGSYS',
'SIGTERM',
'SIGTRAP',
'SIGTTIN',
'SIGTTOU',
'SIGUNUSED',
'SIGURG',
'SIGUSR1',
'SIGUSR2',
'SIGVTALRM',
'SIGXCPU',
'SIGXFSZ',
'SIGWINCH',
];
exports.command = ['execute <language> <file> [args..]'];
exports.aliases = ['run'];
@ -15,18 +51,18 @@ exports.builder = {
string: true,
desc: 'Set the version of the language to use',
alias: ['l'],
default: '*'
default: '*',
},
stdin: {
boolean: true,
desc: 'Read input from stdin and pass to executor',
alias: ['i']
alias: ['i'],
},
run_timeout: {
alias: ['rt', 'r'],
number: true,
desc: 'Milliseconds before killing run process',
default: 3000
default: 3000,
},
compile_timeout: {
alias: ['ct', 'c'],
@ -42,117 +78,126 @@ exports.builder = {
interactive: {
boolean: true,
alias: ['t'],
desc: 'Run interactively using WebSocket transport'
desc: 'Run interactively using WebSocket transport',
},
status: {
boolean: true,
alias: ['s'],
desc: 'Output additional status to stderr'
}
desc: 'Output additional status to stderr',
},
};
async function handle_interactive(files, argv) {
const ws = new WebSocket(argv.pistonUrl.replace("http", "ws") + "/api/v2/connect")
const ws = new WebSocket(
argv.pistonUrl.replace('http', 'ws') + '/api/v2/connect'
);
const log_message = (process.stderr.isTTY && argv.status) ? console.error : ()=>{};
const log_message =
process.stderr.isTTY && argv.status ? console.error : () => {};
process.on("exit", ()=>{
process.on('exit', () => {
ws.close();
process.stdin.end();
process.stdin.destroy();
process.exit();
})
});
for (const signal of SIGNALS) {
process.on(signal, () => {
ws.send(JSON.stringify({type: 'signal', signal}))
})
ws.send(JSON.stringify({ type: 'signal', signal }));
});
}
ws.on('open', () => {
const request = {
type: "init",
type: 'init',
language: argv.language,
version: argv['language_version'],
files: files,
args: argv.args,
compile_timeout: argv.ct,
run_timeout: argv.rt
}
run_timeout: argv.rt,
};
ws.send(JSON.stringify(request))
log_message(chalk.white.bold("Connected"))
ws.send(JSON.stringify(request));
log_message(chalk.white.bold('Connected'));
process.stdin.resume();
process.stdin.on("data", (data) => {
ws.send(JSON.stringify({
type: "data",
stream: "stdin",
data: data.toString()
}))
})
process.stdin.on('data', data => {
ws.send(
JSON.stringify({
type: 'data',
stream: 'stdin',
data: data.toString(),
})
);
});
});
ws.on("close", (code, reason)=>{
ws.on('close', (code, reason) => {
log_message(
chalk.white.bold("Disconnected: "),
chalk.white.bold("Reason: "),
chalk.white.bold('Disconnected: '),
chalk.white.bold('Reason: '),
chalk.yellow(`"${reason}"`),
chalk.white.bold("Code: "),
chalk.yellow(`"${code}"`),
)
process.stdin.pause()
})
chalk.white.bold('Code: '),
chalk.yellow(`"${code}"`)
);
process.stdin.pause();
});
ws.on('message', function (data) {
const msg = JSON.parse(data);
switch (msg.type) {
case "runtime":
log_message(chalk.bold.white("Runtime:"), chalk.yellow(`${msg.language} ${msg.version}`))
case 'runtime':
log_message(
chalk.bold.white('Runtime:'),
chalk.yellow(`${msg.language} ${msg.version}`)
);
break;
case "stage":
log_message(chalk.bold.white("Stage:"), chalk.yellow(msg.stage))
case 'stage':
log_message(
chalk.bold.white('Stage:'),
chalk.yellow(msg.stage)
);
break;
case "data":
if(msg.stream == "stdout") process.stdout.write(msg.data)
else if(msg.stream == "stderr") process.stderr.write(msg.data)
else log_message(chalk.bold.red(`(${msg.stream}) `), msg.data)
case 'data':
if (msg.stream == 'stdout') process.stdout.write(msg.data);
else if (msg.stream == 'stderr') process.stderr.write(msg.data);
else log_message(chalk.bold.red(`(${msg.stream}) `), msg.data);
break;
case "exit":
case 'exit':
if (msg.signal === null)
log_message(
chalk.white.bold("Stage"),
chalk.white.bold('Stage'),
chalk.yellow(msg.stage),
chalk.white.bold("exited with code"),
chalk.white.bold('exited with code'),
chalk.yellow(msg.code)
)
);
else
log_message(
chalk.white.bold("Stage"),
chalk.white.bold('Stage'),
chalk.yellow(msg.stage),
chalk.white.bold("exited with signal"),
chalk.white.bold('exited with signal'),
chalk.yellow(msg.signal)
)
);
break;
default:
log_message(chalk.red.bold("Unknown message:"), msg)
log_message(chalk.red.bold('Unknown message:'), msg);
}
})
});
}
async function run_non_interactively(files, argv) {
const stdin = (argv.stdin && await new Promise((resolve, _) => {
const stdin =
(argv.stdin &&
(await new Promise((resolve, _) => {
let data = '';
process.stdin.on('data', d => data += d);
process.stdin.on('data', d => (data += d));
process.stdin.on('end', _ => resolve(data));
})) || '';
}))) ||
'';
const request = {
language: argv.language,
@ -161,7 +206,7 @@ async function run_non_interactively(files, argv) {
args: argv.args,
stdin,
compile_timeout: argv.ct,
run_timeout: argv.rt
run_timeout: argv.rt,
};
let { data: response } = await argv.axios.post('/api/v2/execute', request);
@ -170,13 +215,13 @@ async function run_non_interactively(files, argv) {
console.log(chalk.bold(`== ${name} ==`));
if (ctx.stdout) {
console.log(chalk.bold(`STDOUT`))
console.log(ctx.stdout.replace(/\n/g,'\n '))
console.log(chalk.bold(`STDOUT`));
console.log(ctx.stdout.replace(/\n/g, '\n '));
}
if (ctx.stderr) {
console.log(chalk.bold(`STDERR`))
console.log(ctx.stderr.replace(/\n/g,'\n '))
console.log(chalk.bold(`STDERR`));
console.log(ctx.stderr.replace(/\n/g, '\n '));
}
if (ctx.code) {
@ -187,12 +232,9 @@ async function run_non_interactively(files, argv) {
}
if (ctx.signal) {
console.log(
chalk.bold(`Signal:`),
chalk.bold.yellow(ctx.signal)
);
}
console.log(chalk.bold(`Signal:`), chalk.bold.yellow(ctx.signal));
}
};
if (response.compile) {
step('Compile', response.compile);
@ -201,17 +243,23 @@ async function run_non_interactively(files, argv) {
step('Run', response.run);
}
exports.handler = async (argv) => {
const files = [...(argv.files || []),argv.file]
.map(file_path => {
exports.handler = async argv => {
const files = [...(argv.files || []), argv.file].map(file_path => {
const buffer = fs.readFileSync(file_path);
const encoding =
(buffer
.toString()
.split('')
.some(x => x.charCodeAt(0) >= 128) &&
'base64') ||
'utf8';
return {
name: path.basename(file_path),
content: fs.readFileSync(file_path).toString()
content: buffer.toString(encoding),
encoding,
};
});
if (argv.interactive) await handle_interactive(files, argv);
else await run_non_interactively(files, argv);
}
};

View File

@ -6,8 +6,8 @@ const axios_instance = argv => {
argv.axios = axios.create({
baseURL: argv['piston-url'],
headers: {
'Content-Type': 'application/json'
}
'Content-Type': 'application/json',
},
});
return argv;
@ -18,12 +18,11 @@ require('yargs')(process.argv.slice(2))
alias: ['u'],
default: 'http://127.0.0.1:2000',
desc: 'Piston API URL',
string: true
string: true,
})
.middleware(axios_instance)
.scriptName('piston')
.commandDir('commands')
.demandCommand()
.help()
.wrap(72)
.argv;
.wrap(72).argv;

View File

@ -60,6 +60,7 @@ Runs the given code, using the given runtime and arguments, returning the result
- `files`: An array of files which should be uploaded into the job context
- `files[].name` (_optional_): Name of file to be written, if none a random name is picked
- `files[].content`: Content of file to be written
- `files[].encoding` (_optional_): The encoding scheme used for the file content. One of `base64`, `hex` or `utf8`. Defaults to `utf8`.
- `stdin` (_optional_): Text to pass into stdin of the program. Defaults to blank string.
- `args` (_optional_): Arguments to pass to the program. Defaults to none
- `run_timeout` (_optional_): The maximum allowed time in milliseconds for the run stage to finish before bailing out. Must be a number, less than or equal to the configured maximum timeout.
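A minimal sketch of a request using the new `encoding` field (the URL, language, and file content are illustrative):

```js
// Sketch: POST a base64-encoded source file to the v2 execute endpoint.
const axios = require('axios');

const request = {
    language: 'python',
    version: '*',
    files: [
        {
            name: 'main.py',
            content: Buffer.from("print('OK')").toString('base64'),
            encoding: 'base64', // one of base64, hex, utf8 (defaults to utf8)
        },
    ],
};

axios
    .post('http://127.0.0.1:2000/api/v2/execute', request)
    .then(({ data }) => console.log(data.run.stdout));
```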

View File

@ -86,11 +86,11 @@ key: PISTON_MAX_PROCESS_COUNT
default: 64
```
Maximum number of processess allowed to to have open for a job.
Maximum number of processes a job is allowed to have open.
Resists against exhausting the process table, causing a full system lockup.
## Output Max Side
## Output Max Size
```yaml
key: PISTON_OUTPUT_MAX_SIZE
@ -123,6 +123,21 @@ Maximum size for a singular file written to disk.
Resists against large file writes to exhaust disk space.
## Compile/Run timeouts
```yaml
key:
- PISTON_COMPILE_TIMEOUT
default: 10000
key:
- PISTON_RUN_TIMEOUT
default: 3000
```
The maximum time that is allowed to be taken by a stage in milliseconds.
Use -1 for unlimited time.
## Compile/Run memory limits
```yaml
@ -154,3 +169,19 @@ default: 64
```
Maximum number of jobs to run concurrently.
## Limit overrides
```yaml
key: PISTON_LIMIT_OVERRIDES
default: {}
```
Per-language overrides/exceptions for each of `max_process_count`, `max_open_files`, `max_file_size`,
`compile_memory_limit`, `run_memory_limit`, `compile_timeout`, `run_timeout`, `output_max_size`. Defined as follows:
```
PISTON_LIMIT_OVERRIDES={"c++":{"max_process_count":128}}
```
This will give `c++` a max_process_count of 128 regardless of the configuration.

View File

@ -1 +1 @@
mkdocs==1.1.2
mkdocs==1.2.3

View File

@ -21,6 +21,7 @@
compile? null,
packages? null,
aliases? [],
limitOverrides? {},
tests
}: let
compileFile = if compile != null then
@ -28,7 +29,7 @@
else null;
runFile = pkgs.writeShellScript "run" run;
metadata = {
inherit language version runtime aliases;
inherit language version runtime aliases limitOverrides;
run = runFile;
compile = compileFile;
packageSupport = packages != null;

package-lock.json generated Normal file

@@ -0,0 +1,32 @@
{
"name": "piston",
"lockfileVersion": 2,
"requires": true,
"packages": {
"": {
"devDependencies": {
"prettier": "2.4.1"
}
},
"node_modules/prettier": {
"version": "2.4.1",
"resolved": "https://registry.npmjs.org/prettier/-/prettier-2.4.1.tgz",
"integrity": "sha512-9fbDAXSBcc6Bs1mZrDYb3XKzDLm4EXXL9sC1LqKP5rZkT6KRr/rf9amVUcODVXgguK/isJz0d0hP72WeaKWsvA==",
"dev": true,
"bin": {
"prettier": "bin-prettier.js"
},
"engines": {
"node": ">=10.13.0"
}
}
},
"dependencies": {
"prettier": {
"version": "2.4.1",
"resolved": "https://registry.npmjs.org/prettier/-/prettier-2.4.1.tgz",
"integrity": "sha512-9fbDAXSBcc6Bs1mZrDYb3XKzDLm4EXXL9sC1LqKP5rZkT6KRr/rf9amVUcODVXgguK/isJz0d0hP72WeaKWsvA==",
"dev": true
}
}
}

package.json Normal file

@@ -0,0 +1,5 @@
{
"devDependencies": {
"prettier": "2.4.1"
}
}

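The two files above exist only to pin `prettier` as the repo's formatting tool; assuming the standard prettier CLI, the whole tree is formatted with:

```bash
npx prettier --write .   # rewrite all supported files in place
```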
packages/befunge93/0.2.0/build.sh vendored Normal file

@@ -0,0 +1,15 @@
#!/usr/bin/env bash
# source python 2.7
source ../../python/2.7.18/build.sh
# clone befunge repo
git clone -q 'https://github.com/programble/befungee' befunge93
# go inside befunge93 so we can checkout
cd befunge93
# checkout the version 0.2.0
git checkout tags/v0.2.0
cd ..


@@ -2,3 +2,4 @@
# Put 'export' statements here for environment variables
export PATH=$PWD/bin:$PATH
export BEFUNGE93_PATH=$PWD/befunge93


@@ -0,0 +1,5 @@
{
"language": "befunge93",
"version": "0.2.0",
"aliases": ["b93"]
}

packages/befunge93/0.2.0/run vendored Normal file

@@ -0,0 +1,4 @@
#!/usr/bin/env bash
# run the befunge program with the file name
python2.7 "$BEFUNGE93_PATH"/befungee.py "$1"

packages/befunge93/0.2.0/test.b93 vendored Normal file

@@ -0,0 +1 @@
64+"KO">:#,_@

packages/brachylog/1.0.0/build.sh vendored Normal file

@@ -0,0 +1,20 @@
#!/usr/bin/env bash
# build prolog 8.2.4 as dependency
source ../../prolog/8.2.4/build.sh
# curl brachylog 1.0.0
curl -L "https://github.com/JCumin/Brachylog/archive/refs/tags/v1.0-ascii.tar.gz" -o brachylog.tar.gz
tar xzf brachylog.tar.gz --strip-components=1
rm brachylog.tar.gz
# move swi prolog to working directory
cp bin/swipl swipl
# give execution permission to swipl
chmod +x swipl
# add some code to brachylog.pl so we don't have to escape backslashes while using the interactive mode
echo '
:-feature(argv, [Code, Stdin]), run_from_atom(Code, Stdin, _), halt.' >> prolog_parser/brachylog.pl


@@ -2,3 +2,4 @@
# Put 'export' statements here for environment variables
export PATH=$PWD/bin:$PATH
export BRACHYLOG_PATH=$PWD


@@ -0,0 +1,5 @@
{
"language": "brachylog",
"version": "1.0.0",
"aliases": []
}

packages/brachylog/1.0.0/run vendored Normal file

@@ -0,0 +1,19 @@
#!/usr/bin/env bash
# save the file for later
file="$1"
# remove the file from $@
shift
# save stdin as $@ joined by newlines
stdin=`printf "%s\n" "$@"`
# save code as the contents of $file
code=`cat "$file"`
# go to the directory where brachylog.pl is so the imports work
cd "$BRACHYLOG_PATH"/prolog_parser
# run swi prolog with code and stdin
swipl -f brachylog.pl "$code" "$stdin"


@@ -0,0 +1 @@
"OK"w


@@ -1,7 +0,0 @@
#!/bin/bash
PREFIX=$(realpath $(dirname $0))
curl -L "https://github.com/crystal-lang/crystal/releases/download/0.36.1/crystal-0.36.1-1-linux-x86_64.tar.gz" -o crystal.tar.gz
tar xzf crystal.tar.gz --strip-components=1
rm crystal.tar.gz


@@ -1,5 +0,0 @@
#!/usr/bin/env bash
# Compile crystal files into out file
crystal build "$@" -o out --no-color && \
chmod +x out


@@ -1 +0,0 @@
export PATH=$PWD/bin:$PATH


@@ -1,5 +0,0 @@
{
"language": "crystal",
"version": "0.36.1",
"aliases": ["crystal", "cr"]
}


@@ -1,4 +0,0 @@
#!/bin/bash
shift # Filename is only used to compile
./out "$@"


@@ -1 +0,0 @@
puts("OK")


@@ -1,11 +0,0 @@
#!/usr/bin/env bash
curl -L "https://storage.googleapis.com/dart-archive/channels/stable/release/2.12.1/sdk/dartsdk-linux-x64-release.zip" -o dart.zip
unzip dart.zip
rm dart.zip
cp -r dart-sdk/* .
rm -rf dart-sdk
chmod -R +rx bin


@@ -1,5 +0,0 @@
{
"language": "dart",
"version": "2.12.1",
"aliases": []
}


@@ -1,4 +0,0 @@
#!/usr/bin/env bash
# Put instructions to run the runtime
dart run "$@"


@@ -1,3 +0,0 @@
void main() {
print('OK');
}


@@ -1,19 +0,0 @@
#!/usr/bin/env bash
# Put instructions to build your package in here
PREFIX=$(realpath $(dirname $0))
mkdir -p build
cd build
curl "http://gondor.apana.org.au/~herbert/dash/files/dash-0.5.11.tar.gz" -o dash.tar.gz
tar xzf dash.tar.gz --strip-components=1
./configure --prefix "$PREFIX" &&
make -j$(nproc) &&
make install -j$(nproc)
cd ../
rm -rf build


@@ -1,5 +0,0 @@
{
"language": "dash",
"version": "0.5.11",
"aliases": ["dash"]
}


@@ -1,4 +0,0 @@
#!/usr/bin/env bash
# Put instructions to run the runtime
dash "$@"


@@ -1 +0,0 @@
echo "OK"


@@ -1,5 +0,0 @@
curl -L https://github.com/denoland/deno/releases/download/v1.7.5/deno-x86_64-unknown-linux-gnu.zip --output deno.zip
unzip -o deno.zip
rm deno.zip
chmod +x deno


@@ -1 +0,0 @@
export PATH=$PWD:$PATH


@@ -1,14 +0,0 @@
{
"language": "deno",
"version": "1.7.5",
"provides": [
{
"language": "typescript",
"aliases": ["deno-ts","deno"]
},
{
"language": "javascript",
"aliases": ["deno-js"]
}
]
}


@@ -1,2 +0,0 @@
#!/bin/bash
DENO_DIR=$PWD deno run "$@"


@@ -1 +0,0 @@
console.log("OK")

packages/dotnet/5.0.201/build.sh vendored Normal file → Executable file

@@ -7,8 +7,10 @@ rm dotnet.tar.gz
# Cache nuget packages
export DOTNET_CLI_HOME=$PWD
./dotnet new console -o cache_application
./dotnet new console -lang F# -o fs_cache_application
./dotnet new console -lang VB -o vb_cache_application
# This calls a restore on the global-packages index ($DOTNET_CLI_HOME/.nuget/packages)
# If we want to allow more packages, we could add them to this cache_application
-rm -rf cache_application
+rm -rf cache_application fs_cache_application vb_cache_application
# Get rid of it, we don't actually need the application - just the restore


@@ -1,15 +1,36 @@
#!/usr/bin/env bash
[ "${PISTON_LANGUAGE}" == "fsi" ] && exit 0
export DOTNET_CLI_HOME=$PWD
export HOME=$PWD
rename 's/$/\.cs/' "$@" # Add .cs extension
dotnet build --help > /dev/null # Shut the thing up
case "${PISTON_LANGUAGE}" in
basic.net)
rename 's/$/\.vb/' "$@" # Add .vb extension
dotnet new console -lang VB -o . --no-restore
rm Program.vb
;;
fsharp.net)
first_file=$1
shift
rename 's/$/\.fs/' "$@" # Add .fs extension
dotnet new console -lang F# -o . --no-restore
mv $first_file Program.fs # For some reason F#.net doesn't work unless the file name is Program.fs
;;
csharp.net)
rename 's/$/\.cs/' "$@" # Add .cs extension
dotnet new console -o . --no-restore
rm Program.cs
;;
*)
echo "How did you get here? (${PISTON_LANGUAGE})"
exit 1
;;
esac
dotnet restore --source $DOTNET_ROOT/.nuget/packages
dotnet build --no-restore


@@ -3,3 +3,4 @@
# Put 'export' statements here for environment variables
export DOTNET_ROOT=$PWD
export PATH=$DOTNET_ROOT:$PATH
export FSI_PATH=$(find $(pwd) -name fsi.dll)


@@ -1,5 +1,66 @@
{
"language": "dotnet",
"version": "5.0.201",
"aliases": ["cs", "csharp"]
"provides": [
{
"language": "basic.net",
"aliases": [
"basic",
"visual-basic",
"visual-basic.net",
"vb",
"vb.net",
"vb-dotnet",
"dotnet-vb",
"basic-dotnet",
"dotnet-basic"
],
"limit_overrides": { "max_process_count": 128 }
},
{
"language": "fsharp.net",
"aliases": [
"fsharp",
"fs",
"f#",
"fs.net",
"f#.net",
"fsharp-dotnet",
"fs-dotnet",
"f#-dotnet",
"dotnet-fsharp",
"dotnet-fs",
"dotnet-fs"
],
"limit_overrides": { "max_process_count": 128 }
},
{
"language": "csharp.net",
"aliases": [
"csharp",
"c#",
"cs",
"c#.net",
"cs.net",
"c#-dotnet",
"cs-dotnet",
"csharp-dotnet",
"dotnet-c#",
"dotnet-cs",
"dotnet-csharp"
],
"limit_overrides": { "max_process_count": 128 }
},
{
"language": "fsi",
"aliases": [
"fsx",
"fsharp-interactive",
"f#-interactive",
"dotnet-fsi",
"fsi-dotnet",
"fsi.net"
]
}
]
}


@@ -3,5 +3,23 @@
# Put instructions to run the runtime
export DOTNET_CLI_HOME=$PWD
case "${PISTON_LANGUAGE}" in
basic.net)
;&
fsharp.net)
;&
csharp.net)
shift
dotnet bin/Debug/net5.0/$(basename $(realpath .)).dll "$@"
;;
fsi)
FILENAME=$1
rename 's/$/\.fsx/' $FILENAME # Add .fsx extension
shift
dotnet $FSI_PATH $FILENAME.fsx "$@"
;;
*)
echo "How did you get here? (${PISTON_LANGUAGE})"
exit 1
;;
esac

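The compiled branches above recover the output DLL name from the working directory: `dotnet new console -o .` (used in the compile script) names the project after the directory it lands in, so `$(basename $(realpath .))` reproduces that name at run time. A sketch with a hypothetical job directory:

```bash
cd /piston/jobs/2a7f41e0   # hypothetical job directory
echo "bin/Debug/net5.0/$(basename $(realpath .)).dll"
# -> bin/Debug/net5.0/2a7f41e0.dll
```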
packages/dotnet/5.0.201/test.fs vendored Normal file

@@ -0,0 +1,6 @@
open System

[<EntryPoint>]
let main argv =
    printfn "OK"
    0

packages/dotnet/5.0.201/test.fsx vendored Normal file

@@ -0,0 +1 @@
printfn "OK"

packages/dotnet/5.0.201/test.vb vendored Normal file

@@ -0,0 +1,9 @@
Imports System

Module Module1

    Sub Main()
        Console.WriteLine("OK")
    End Sub

End Module


@@ -1,25 +0,0 @@
#!/bin/bash
source ../../erlang/23.0.0/build.sh
export PATH=$PWD/bin:$PATH
PREFIX=$(realpath $(dirname $0))
mkdir -p build
cd build
curl -L "https://github.com/elixir-lang/elixir/archive/v1.11.3.tar.gz" -o elixir.tar.gz
tar xzf elixir.tar.gz --strip-components=1
rm elixir.tar.gz
./configure --prefix "$PREFIX"
make -j$(nproc)
cd ..
cp -r build/bin .
cp -r build/lib .
rm -rf build


@@ -1,5 +0,0 @@
#!/usr/bin/env bash
# Put 'export' statements here for environment variables
export LC_ALL=en_US.UTF-8
export PATH=$PWD/bin:$PATH


@@ -1,5 +0,0 @@
{
"language": "elixir",
"version": "1.11.3",
"aliases": ["elixir", "exs"]
}


@@ -1,4 +0,0 @@
#!/bin/bash
# Put instructions to run the runtime
elixir "$@"


@@ -1 +0,0 @@
IO.puts("OK")


@@ -1,21 +0,0 @@
#!/bin/bash
PREFIX=$(realpath $(dirname $0))
mkdir -p build
cd build
curl "http://erlang.org/download/otp_src_23.0.tar.gz" -o erlang.tar.gz
tar xzf erlang.tar.gz --strip-components=1
rm erlang.tar.gz
export ERL_TOP=$(pwd)
./configure --prefix "$PREFIX"
make -j$(nproc)
make install -j$(nproc)
cd ..
rm -rf build


@@ -1,4 +0,0 @@
#!/usr/bin/env bash
# Put 'export' statements here for environment variables
export PATH=$PWD/bin:$PATH


@@ -1,5 +0,0 @@
{
"language": "erlang",
"version": "23.0.0",
"aliases": ["erlang", "erl", "escript"]
}


@@ -1,4 +0,0 @@
#!/bin/bash
# Put instructions to run the runtime
escript "$@"


@@ -1,3 +0,0 @@
main(_) ->
    io:format("OK~n").


@@ -1,21 +0,0 @@
#!/usr/bin/env bash
# Put instructions to build your package in here
PREFIX=$(realpath $(dirname $0))
mkdir -p build
cd build
curl "https://ftp.gnu.org/gnu/gawk/gawk-5.1.0.tar.gz" -o gawk.tar.gz
tar xzf gawk.tar.gz --strip-components=1
# === autoconf based ===
./configure --prefix "$PREFIX"
make -j$(nproc)
make install -j$(nproc)
cd ../
rm -rf build


@@ -1,4 +0,0 @@
#!/usr/bin/env bash
# Put 'export' statements here for environment variables
export PATH=$PWD/bin:$PATH


@@ -1,10 +0,0 @@
{
"language": "gawk",
"version": "5.1.0",
"provides": [
{
"language": "awk",
"aliases": ["gawk"]
}
]
}


@@ -1,4 +0,0 @@
#!/usr/bin/env bash
# Put instructions to run the runtime
gawk-5.1.0 -f "$@"


@@ -1 +0,0 @@
{print "OK"}

packages/husk/1.0.0/build.sh vendored Normal file

@@ -0,0 +1,14 @@
#!/usr/bin/env bash
cp ../../haskell/9.0.1/build.sh ./haskell-build.sh
sed -Ei 's/9\.0\.1/8\.10\.7/g' ./haskell-build.sh
source ./haskell-build.sh
# compile Husk from source
git clone -q "https://github.com/barbuz/husk.git"
cd husk
../bin/ghc -O2 Husk
# cleanup
cd ..
rm -f haskell-build.sh

packages/husk/1.0.0/environment vendored Normal file

@@ -0,0 +1,6 @@
#!/usr/bin/env bash
# haskell and husk path
export PATH=$PWD/bin:$PATH
export HUSK_PATH=$PWD/husk
export LANG=en_US.UTF8

packages/husk/1.0.0/metadata.json vendored Normal file

@@ -0,0 +1,5 @@
{
"language": "husk",
"version": "1.0.0",
"aliases": []
}

packages/husk/1.0.0/run vendored Normal file

@@ -0,0 +1,10 @@
#!/usr/bin/env bash
# Store the current path because we'll need it to run the program file
PROGRAM_PATH=$PWD
# For now, Husk can only be run within the folder that has the imported modules
cd $HUSK_PATH
# Run Husk from file in unicode format with the given args
./Husk -uf "${PROGRAM_PATH}/${@}"

packages/husk/1.0.0/test.husk vendored Normal file

@@ -0,0 +1 @@
"OK

packages/llvm_ir/12.0.1/build.sh vendored Executable file

@@ -0,0 +1,6 @@
#!/usr/bin/env bash
curl -L "https://github.com/llvm/llvm-project/releases/download/llvmorg-12.0.1/clang+llvm-12.0.1-x86_64-linux-gnu-ubuntu-16.04.tar.xz" -o llvm-ir.tar.xz
tar xf llvm-ir.tar.xz clang+llvm-12.0.1-x86_64-linux-gnu-ubuntu-16.04/bin --strip-components=1
rm llvm-ir.tar.xz

packages/llvm_ir/12.0.1/compile vendored Executable file

@@ -0,0 +1,4 @@
#!/usr/bin/env bash
llc "$@" -o binary.s
clang binary.s -o binary

packages/llvm_ir/12.0.1/environment vendored Normal file

@@ -0,0 +1,2 @@
#!/usr/bin/env bash
export PATH=$PWD/bin:$PATH

packages/llvm_ir/12.0.1/metadata.json vendored Normal file

@@ -0,0 +1,5 @@
{
"language": "llvm_ir",
"version": "12.0.1",
"aliases": ["llvm", "llvm-ir", "ll"]
}

packages/llvm_ir/12.0.1/run vendored Normal file

@@ -0,0 +1,4 @@
#!/usr/bin/env bash
shift
binary "$@"

packages/llvm_ir/12.0.1/test.ll vendored Normal file

@@ -0,0 +1,10 @@
@.str = private unnamed_addr constant [2 x i8] c"OK"
declare i32 @puts(i8* nocapture) nounwind
define i32 @main() {
%cast210 = getelementptr [2 x i8],[2 x i8]* @.str, i64 0, i64 0
call i32 @puts(i8* %cast210)
ret i32 0
}


@@ -1,20 +1,13 @@
#!/bin/bash
-check_errors () {
-    grep -q 'error [A-Z]\+[0-9]\+:' check.txt && cat check.txt 1>&2 || cat check.txt
-    rm check.txt
-}
case "${PISTON_LANGUAGE}" in
csharp)
rename 's/$/\.cs/' "$@" # Add .cs extension
-        csc -out:out *.cs > check.txt
-        check_errors
+        csc -out:out *.cs
;;
basic)
rename 's/$/\.vb/' "$@" # Add .vb extension
-        vbnc -out:out *.vb > check.txt
-        check_errors
+        vbnc -out:out *.vb
;;
*)
echo "How did you get here? (${PISTON_LANGUAGE})"


@@ -8,7 +8,13 @@
},
{
"language": "basic",
"aliases": ["vb", "mono-vb", "mono-basic", "visual-basic", "visual basic"]
"aliases": [
"vb",
"mono-vb",
"mono-basic",
"visual-basic",
"visual basic"
]
}
]
}

packages/racket/8.3.0/build.sh vendored Normal file

@@ -0,0 +1,10 @@
#!/usr/bin/env bash
# curl racket 8.3 linux installation shell file
curl -L 'https://download.racket-lang.org/installers/8.3/racket-8.3-x86_64-linux-cs.sh' -o racket.sh
# provide settings "no" "4" and "<CR>" to racket.sh
echo "no
4
" | sh racket.sh

packages/racket/8.3.0/environment vendored Normal file

@@ -0,0 +1,5 @@
#!/bin/bash
# Path to racket binary
export PATH=$PWD/bin:$PATH
export RACKET_PATH=$PWD/racket

packages/racket/8.3.0/metadata.json vendored Normal file

@@ -0,0 +1,5 @@
{
"language": "racket",
"version": "8.3.0",
"aliases": ["rkt"]
}

packages/racket/8.3.0/run vendored Normal file

@@ -0,0 +1,3 @@
#!/bin/bash
"$RACKET_PATH"/bin/racket "$@"

packages/racket/8.3.0/test.rkt vendored Normal file

@@ -0,0 +1,3 @@
#lang racket
(display "OK")

packages/retina/1.2.0/build.sh vendored Normal file

@@ -0,0 +1,22 @@
#!/usr/bin/env bash
# get dotnet 2.2.8 as a dependency for retina
curl "https://download.visualstudio.microsoft.com/download/pr/022d9abf-35f0-4fd5-8d1c-86056df76e89/477f1ebb70f314054129a9f51e9ec8ec/dotnet-sdk-2.2.207-linux-x64.tar.gz" -Lo dotnet.tar.gz
tar xzf dotnet.tar.gz --strip-components=1
rm dotnet.tar.gz
export DOTNET_CLI_HOME=$PWD
./dotnet new console -o cache_application
rm -rf cache_application
# curl retina version 1.2.0
curl -L "https://github.com/m-ender/retina/releases/download/v1.2.0/retina-linux-x64.tar.gz" -o retina.tar.xz
tar xf retina.tar.xz --strip-components=1
rm retina.tar.xz
# move the libhostfxr.so file to the current directory so we don't have to set DOTNET_ROOT
mv host/fxr/2.2.8/libhostfxr.so libhostfxr.so
# give execute permissions to retina
chmod +x Retina

packages/retina/1.2.0/environment vendored Normal file

@@ -0,0 +1,4 @@
#!/bin/bash
export PATH=$PWD/bin:$PATH
export RETINA_PATH=$PWD

packages/retina/1.2.0/metadata.json vendored Normal file

@@ -0,0 +1,5 @@
{
"language": "retina",
"version": "1.2.0",
"aliases": ["ret"]
}

Some files were not shown because too many files have changed in this diff.