Rock can be configured through a configuration object that defines various aspects of your project setup.
The most basic configuration, assuming you only support the iOS platform and choose Metro as your bundler, would look like this:
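A minimal sketch, assuming `rock.config.mjs` as the config file name and `platformIOS`/`pluginMetro` as the factory functions exported by the packages described below:

```js
// rock.config.mjs
import { platformIOS } from '@rock-js/platform-ios';
import { pluginMetro } from '@rock-js/plugin-metro';

export default {
  // Explicitly declare the single supported platform...
  platforms: {
    ios: platformIOS(),
  },
  // ...and the bundler that serves and bundles your JavaScript
  bundler: pluginMetro(),
};
```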
It's an intentional design decision to explicitly define platforms, bundlers, etc., so you can e.g. add more platforms or replace the bundler with a different one.
A plugin is a partially applied function that has access to the `api` object of the `PluginApi` type:
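An abbreviated sketch of the shape; the exact methods ship with Rock, `CommandType` is reduced here to the fields used in the example below, and the returned metadata object is an assumption:

```ts
type CommandType = {
  name: string;
  description: string;
  action: (...args: unknown[]) => void | Promise<void>;
};

type PluginApi = {
  // Register additional CLI commands
  registerCommand: (command: CommandType) => void;
  // Query project information
  getProjectRoot: () => string;
  getPlatforms: () => { [platform: string]: object };
};

// A plugin: a partially applied function receiving `api`
type Plugin = (api: PluginApi) => { name: string; description: string };
```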
The following configuration options accept plugins: `plugins`, `platforms`, `bundler`.
A plugin that registers a `my-command` command outputting a hello world would look like this:
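A sketch of such a plugin; the factory-function shape mirrors the `Plugin` type assumed above:

```js
// A factory so the plugin can be configured; Rock partially applies it with `api`
const pluginMyCommand = (pluginConfig) => (api) => {
  api.registerCommand({
    name: 'my-command',
    description: 'Prints a greeting',
    action: async () => {
      console.log('hello world');
    },
  });

  // Plugin metadata (shape is an assumption)
  return {
    name: 'plugin-my-command',
    description: 'Registers my-command',
  };
};

export default pluginMyCommand;
```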
Bundler is a plugin that registers commands for running a dev server and bundling final JavaScript or Hermes bytecode.
By default, Rock ships with two bundlers: Metro (`@rock-js/plugin-metro`) and Re.Pack (`@rock-js/plugin-repack`).
You can configure the bundler like this:
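A sketch, assuming `pluginMetro` and `pluginRepack` are the factory exports of the bundler packages:

```js
// rock.config.mjs
import { pluginMetro } from '@rock-js/plugin-metro';
// or: import { pluginRepack } from '@rock-js/plugin-repack';

export default {
  bundler: pluginMetro(),
};
```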
Platform is a plugin that registers platform-specific functionality such as commands to build the project and run it on a device or simulator.
By default, Rock ships with two platforms: iOS (`@rock-js/platform-ios`) and Android (`@rock-js/platform-android`).
You can configure the platform like this:
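A sketch, assuming `platformIOS` and `platformAndroid` factory exports; any platform-specific options would be passed to the factories:

```js
// rock.config.mjs
import { platformIOS } from '@rock-js/platform-ios';
import { platformAndroid } from '@rock-js/platform-android';

export default {
  platforms: {
    ios: platformIOS(),
    android: platformAndroid(),
  },
};
```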
One of the key features of Rock is remote build caching, which speeds up your development workflow. By remote cache we mean native build artifacts (e.g. APK or IPA binaries) that are discoverable by the user and available for download. A remote cache can live on any static storage provider, such as S3, R2, or GitHub Artifacts. For Rock to know how and where to access this cache, you'll need to define a `remoteCacheProvider`, which can be either bundled with the framework (such as the one for GitHub Actions) or a custom one that you provide.
When `remoteCacheProvider` is set, the CLI will:

1. Calculate the fingerprint of your project.
2. Look up the remote cache for builds matching that fingerprint and download them when available.
3. Use the local `.rock/` directory for builds downloaded from a remote cache.

Available providers you can use:
If you would like to store native build artifacts in a different kind of remote storage, you can implement your own custom provider.
Regardless of the remote cache provider you set, to download native build artifacts from remote storage, you'll need to upload them first, ideally in a continuous manner. That's why the best place to put the upload logic is your Continuous Integration server.
Rock provides out-of-the-box GitHub Actions for:

- `callstackincubator/ios`: action for iOS, compatible with `@rock-js/provider-github`
- `callstackincubator/android`: action for Android, compatible with `@rock-js/provider-github`
For other CI providers you'll need to manage artifacts yourself. We recommend mimicking the GitHub Actions setup on your CI server.
If you store your code on GitHub, one of the easiest ways to set up the remote cache is through `@rock-js/provider-github` and our GitHub Actions, which will manage building, uploading, and downloading your native artifacts for iOS and Android.

You can configure it as follows:
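A sketch, assuming a `providerGitHub` factory export; the `owner`/`repository` option names are an assumption, and the values are placeholders for your repository coordinates:

```js
// rock.config.mjs
import { providerGitHub } from '@rock-js/provider-github';

export default {
  remoteCacheProvider: providerGitHub({
    owner: 'your-org',
    repository: 'your-repo',
  }),
};
```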
If you prefer to store native build artifacts on AWS S3 or Cloudflare R2, you can use `@rock-js/provider-s3`. You can configure it as follows:
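A sketch, assuming a `providerS3` factory export taking the options documented in the table below; credential values are placeholders:

```js
// rock.config.mjs
import { providerS3 } from '@rock-js/provider-s3';

export default {
  remoteCacheProvider: providerS3({
    bucket: 'your-bucket',
    region: 'us-east-1',
    accessKeyId: 'AKIA...',       // placeholder
    secretAccessKey: 'secret...', // placeholder
  }),
};
```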
Or when using environment variables (since AWS S3 supports reading these when available in `process.env`):
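The same sketch with credentials omitted, since the AWS SDK picks them up from the environment:

```js
// rock.config.mjs
// AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are read
// from process.env by the underlying AWS SDK
import { providerS3 } from '@rock-js/provider-s3';

export default {
  remoteCacheProvider: providerS3({
    bucket: 'your-bucket',
    region: 'us-east-1',
  }),
};
```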
| Option | Type | Required | Description |
|---|---|---|---|
| `endpoint` | string | No | Optional endpoint, necessary for self-hosted S3 servers or Cloudflare R2 integration |
| `bucket` | string | Yes | The bucket name to use for the S3 server |
| `region` | string | Yes | The region of the S3 server |
| `accessKeyId` | string | No | The access key ID for the S3 server. Not required when using IAM roles or other auth methods |
| `secretAccessKey` | string | No | The secret access key for the S3 server. Not required when using IAM roles or other auth methods |
| `profile` | string | No | AWS profile name to use for authentication. Useful for local development |
| `roleArn` | string | No | Role ARN to assume for authentication. Useful for cross-account access |
| `roleSessionName` | string | No | Session name when assuming a role |
| `externalId` | string | No | External ID when assuming a role (for additional security) |
| `directory` | string | No | The directory to store artifacts in the S3 server (defaults to `rock-artifacts`) |
| `name` | string | No | The display name of the provider (defaults to `S3`) |
| `linkExpirationTime` | number | No | The time in seconds for presigned URLs to expire (defaults to 24 hours) |
The S3 provider supports multiple authentication methods through the underlying AWS SDK:

- Environment variables: `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, and optionally `AWS_SESSION_TOKEN` for temporary credentials
- Shared credentials file: `~/.aws/credentials`, selected with the `profile` option
- Assumed roles: `roleArn` to assume a different role, optionally with `profile` as source credentials
- Temporary credentials: the `AWS_SESSION_TOKEN` environment variable

Thanks to the R2 interface being compatible with S3, you can store and retrieve your native build artifacts from Cloudflare R2 storage using the S3 provider. Set the `endpoint` option to point to your account storage.
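A sketch of an R2 setup; the account ID and credential variables are placeholders, and `region: 'auto'` is what R2 conventionally expects:

```js
// rock.config.mjs
import { providerS3 } from '@rock-js/provider-s3';

export default {
  remoteCacheProvider: providerS3({
    endpoint: 'https://<account-id>.r2.cloudflarestorage.com',
    region: 'auto',
    bucket: 'your-bucket',
    accessKeyId: process.env.R2_ACCESS_KEY_ID,
    secretAccessKey: process.env.R2_SECRET_ACCESS_KEY,
  }),
};
```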
You can plug in any remote storage by implementing the `RemoteBuildCache` interface. This section explains how to implement each method and handle the complexity that Rock manages for you.
Your provider must implement:
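An abbreviated sketch of the interface, reconstructed from the method descriptions that follow; the exact argument and return types ship with Rock:

```ts
type RemoteArtifact = { name: string; url: string; id?: string };

interface RemoteBuildCache {
  name: string;
  list(args: { artifactName?: string; limit?: number }): Promise<RemoteArtifact[]>;
  download(args: { artifactName: string }): Promise<Response>;
  delete(args: {
    artifactName?: string;
    limit?: number;
    skipLatest?: boolean;
  }): Promise<RemoteArtifact[]>;
  upload(args: { artifactName: string; uploadArtifactName?: string }): Promise<
    RemoteArtifact & {
      getResponse: (
        buffer: Buffer | ((baseUrl: string) => Buffer),
        contentType?: string
      ) => Response;
    }
  >;
}
```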
`list()`: Return a list of artifacts with at least `name` and a downloadable `url`. Optionally add an `id`.
The artifacts are uploaded as ZIP archives (excluding the ad-hoc scenario), so make sure to append the `.zip` suffix to the `artifactName`.
Example (S3-style): prefix-filter objects and convert each to `{ name, url }`. Signed URLs are fine.
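A sketch using AWS SDK v3; `client`, `bucket`, and `directory` stand in for your provider's own configuration:

```ts
import {
  S3Client,
  ListObjectsV2Command,
  GetObjectCommand,
} from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';

async function list(
  client: S3Client,
  bucket: string,
  directory: string,
  artifactName?: string
) {
  // Prefix-filter objects belonging to this artifact
  const { Contents = [] } = await client.send(
    new ListObjectsV2Command({
      Bucket: bucket,
      Prefix: `${directory}/${artifactName ?? ''}`,
    })
  );
  // Convert each object to { name, url }; signed URLs are fine
  const keys = Contents.flatMap((object) => (object.Key ? [object.Key] : []));
  return Promise.all(
    keys.map(async (key) => ({
      name: key.split('/').pop() ?? key,
      url: await getSignedUrl(
        client,
        new GetObjectCommand({ Bucket: bucket, Key: key }),
        { expiresIn: 60 * 60 * 24 } // 24 hours, matching linkExpirationTime's default
      ),
    }))
  );
}
```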
`download()`: Return a Web `Response` whose `body` is a readable stream of the artifact, with (if available) a `content-length` header. Rock uses this to report download progress.
The artifacts are uploaded as ZIP archives (excluding the ad-hoc scenario), so make sure to append the `.zip` suffix to the `artifactName`.
If your SDK returns a Node stream, convert it to a Web stream and wrap it in a `Response`:
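A sketch assuming AWS SDK v3 and Node 18+, where `Readable.toWeb` is available:

```ts
import { Readable } from 'node:stream';
import { S3Client, GetObjectCommand } from '@aws-sdk/client-s3';

async function download(
  client: S3Client,
  bucket: string,
  key: string
): Promise<Response> {
  const object = await client.send(
    new GetObjectCommand({ Bucket: bucket, Key: key })
  );
  // Convert the Node stream to a Web stream for the Response body
  const body = Readable.toWeb(object.Body as Readable) as ReadableStream;
  const headers = new Headers();
  if (object.ContentLength != null) {
    // Lets Rock report download progress
    headers.set('content-length', String(object.ContentLength));
  }
  return new Response(body, { headers });
}
```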
`delete()`: Delete the requested artifact(s) and return the list of deleted entries: `{ name, url, id? }`.
The artifacts are uploaded as ZIP archives (excluding the ad-hoc scenario), so make sure to append the `.zip` suffix to the `artifactName`.
Respect `skipLatest` if your backend supports ordering/versioning, as it's used to clean up stale artifacts, e.g. ones created in an open pull request. Otherwise you may simply delete the single matching object.
Rock expects `upload()` to return metadata and a `getResponse` function:
`getResponse(buffer, contentType?) => Response`:

- `buffer` is either a `Buffer` (for normal builds), or a `(baseUrl) => Buffer` function (for ad-hoc pages) so you can inject absolute URLs into HTML/plist before upload
- it returns a `Response` object
- `upload` will pass the `uploadArtifactName` variable, so use that instead of `artifactName`
For progress signaling, you can use your SDK's upload progress events (e.g. `httpUploadProgress`) to enqueue chunks proportional to the actual bytes uploaded.

Example (S3-like) using real SDK progress:
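A sketch assuming `@aws-sdk/lib-storage`; the stream enqueues placeholder bytes sized to the SDK's real `httpUploadProgress` events, so the `Response` Rock reads tracks the actual upload:

```ts
import { S3Client } from '@aws-sdk/client-s3';
import { Upload } from '@aws-sdk/lib-storage';

function makeGetResponse(client: S3Client, bucket: string, key: string) {
  return (buffer: Buffer, contentType?: string): Response => {
    let enqueued = 0;
    const stream = new ReadableStream<Uint8Array>({
      start(controller) {
        const upload = new Upload({
          client,
          params: { Bucket: bucket, Key: key, Body: buffer, ContentType: contentType },
        });
        upload.on('httpUploadProgress', ({ loaded = 0 }) => {
          const delta = loaded - enqueued;
          if (delta > 0) {
            // Placeholder chunk sized to the bytes just uploaded
            controller.enqueue(new Uint8Array(delta));
            enqueued = loaded;
          }
        });
        upload.done().then(
          () => controller.close(),
          (error) => controller.error(error)
        );
      },
    });
    return new Response(stream, {
      headers: { 'content-length': String(buffer.length) },
    });
  };
}
```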
Normal builds: Rock uploads a single build artifact (a ZIP archive). Your provider stores it at a path like `<directory>/<artifactName>.zip`.
The `.app` bundle is packed as `app.tar.gz` to preserve permissions and included in the artifact; you just receive the buffer via `getResponse`. You don't need to create the tarball yourself.

Ad-hoc distribution:
With the `--ad-hoc` flag passed to `remote-cache upload`, Rock uploads:

- `<directory>/ad-hoc/<artifactName>/<AppName>.ipa`
- an `index.html` landing page (make sure it's accessible for testers)
- a `manifest.plist`
This `index.html` file will display an ad-hoc distribution web portal, allowing developers and testers to install apps on their provisioned devices by simply clicking "Install App".
Learn more about ad-hoc distribution and how it works with the `remote-cache upload --ad-hoc` command here.
*Screenshots: Ad-hoc distribution web portal*
- If uploads from the local machine aren't supported, throw an error in `upload()` with a link to docs (as the GitHub provider does).
- Return downloadable `url`s from `list()`; signed URLs are OK.
- Set `content-length` on both download and upload `Response` objects so Rock can display progress.
- Rock reads the upload `Response` to show progress, and your SDK promise resolves independently. In tests, mock your SDK's upload to resolve quickly.

Example provider:
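A minimal in-memory sketch against the interface described above; a real provider would talk to actual remote storage, and the `memory://` URLs are stand-ins for downloadable links:

```ts
type RemoteArtifact = { name: string; url: string; id?: string };

export class InMemoryBuildCache {
  name = 'in-memory';
  private store = new Map<string, Buffer>();

  async list({ artifactName }: { artifactName?: string }): Promise<RemoteArtifact[]> {
    return [...this.store.keys()]
      .filter((key) => !artifactName || key.startsWith(artifactName))
      .map((key) => ({ name: key, url: `memory://${key}` }));
  }

  async download({ artifactName }: { artifactName: string }): Promise<Response> {
    const buffer = this.store.get(`${artifactName}.zip`);
    if (!buffer) throw new Error(`Artifact not found: ${artifactName}`);
    return new Response(new Uint8Array(buffer), {
      headers: { 'content-length': String(buffer.length) },
    });
  }

  async delete({ artifactName }: { artifactName?: string }): Promise<RemoteArtifact[]> {
    const deleted = await this.list({ artifactName });
    for (const { name } of deleted) this.store.delete(name);
    return deleted;
  }

  async upload({ artifactName }: { artifactName: string }) {
    const name = `${artifactName}.zip`; // remember the .zip suffix
    return {
      name,
      url: `memory://${name}`,
      getResponse: (buffer: Buffer, contentType?: string): Response => {
        this.store.set(name, buffer);
        // Echo the bytes back so Rock can read the stream and show progress
        return new Response(new Uint8Array(buffer), {
          headers: {
            'content-length': String(buffer.length),
            ...(contentType ? { 'content-type': contentType } : {}),
          },
        });
      },
    };
  }
}
```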
Then use it in your config:
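A sketch, assuming custom providers are passed as a factory function returning the instance; the import path is hypothetical:

```js
// rock.config.mjs
import { InMemoryBuildCache } from './in-memory-build-cache';

export default {
  remoteCacheProvider: () => new InMemoryBuildCache(),
};
```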
If you only want to use the CLI without the remote cache, skipping steps 1 and 2 as well as the warning that you're not using a remote provider, you can disable this functionality by setting `remoteCacheProvider` to `null`:
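For example, in `rock.config.mjs` (the assumed config file name):

```js
// rock.config.mjs
export default {
  remoteCacheProvider: null,
};
```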
A fingerprint is a representation of your native project in the form of a hash (e.g. `378083de0c6e6bb6caf8fb72df658b0b26fb29ef`). It's calculated every time the CLI is run. When the local fingerprint matches one generated on a remote server, we have a match and can download the resulting build for you instead of building it locally.
The fingerprint configuration helps determine when builds should be cached and invalidated in non-standard settings, e.g. when you have git submodules in your project:
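A sketch, assuming `extraSources` and `ignorePaths` options under the `fingerprint` key (modeled on `@expo/fingerprint`'s options mentioned below):

```js
// rock.config.mjs
export default {
  fingerprint: {
    // Hash these extra paths, e.g. a git submodule with native code
    extraSources: ['./my-native-submodule'],
    // Exclude paths that shouldn't invalidate the cache
    ignorePaths: ['docs/**/*'],
  },
};
```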
The fingerprint calculation uses `@expo/fingerprint` under the hood. This means that you can use advanced configuration through a `fingerprint.config.js` file:
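A sketch of such a file, using the `ignorePaths` option from `@expo/fingerprint`; the ignored path is a placeholder:

```js
// fingerprint.config.js
/** @type {import('@expo/fingerprint').Config} */
const config = {
  // Paths excluded from the fingerprint calculation
  ignorePaths: ['android/app/src/debug/**/*'],
};

module.exports = config;
```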