Resource Management

Resource management plays a critical role when building long-lived, large-scale applications. Our applications must use resources efficiently and must not leak them.

Resource leaks, such as unclosed socket connections, database connections, or file descriptors, are unacceptable in web applications. Effect provides robust constructs to address this concern.

To create an application that manages resources safely, we must ensure that every time we open a resource, we have a mechanism in place to close it. This applies whether we fully utilize the resource or encounter exceptions during its use.

In the following sections, we'll delve deeper into how Effect simplifies resource management and ensures resource safety in your applications.

Scope

The Scope data type is fundamental for managing resources safely and in a composable manner in Effect.

In simple terms, a scope represents the lifetime of one or more resources. When a scope is closed, the resources associated with it are guaranteed to be released.

With the Scope data type, you can:

  • Add finalizers, which represent the release of a resource.
  • Close the scope, releasing all acquired resources and executing any added finalizers.

To see how this works in practice, let's walk through an example. Note that in typical Effect usage you won't often work with these low-level scope-management APIs directly.

import { Scope, Effect, Console, Exit } from "effect"
 
const program =
  // create a new scope
  Scope.make().pipe(
    // add finalizer 1
    Effect.tap((scope) =>
      Scope.addFinalizer(scope, Console.log("finalizer 1"))
    ),
    // add finalizer 2
    Effect.tap((scope) =>
      Scope.addFinalizer(scope, Console.log("finalizer 2"))
    ),
    // close the scope
    Effect.flatMap((scope) =>
      Scope.close(scope, Exit.succeed("scope closed successfully"))
    )
  )
 
Effect.runPromise(program)
/*
Output:
finalizer 2 <-- finalizers are executed in reverse order
finalizer 1
*/

By default, when a Scope is closed, all finalizers added to that Scope are executed in the reverse order in which they were added. This makes sense: a resource acquired later may depend on one acquired earlier, so it must be released first.

For instance, if you open a network connection and then access a file on a remote server, you must close the file before closing the network connection. This sequence is critical to maintaining the ability to interact with the remote server.
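To make this ordering concrete, here is a minimal sketch using the same low-level Scope API as above, with Console.log calls standing in for the real acquisition and release steps of a connection and a remote file:

import { Scope, Effect, Console, Exit } from "effect"
 
const program =
  // create a new scope
  Scope.make().pipe(
    // the connection is acquired first, so its finalizer is registered first
    Effect.tap((scope) =>
      Scope.addFinalizer(scope, Console.log("close network connection"))
    ),
    // the remote file is acquired second, so its finalizer is registered second
    Effect.tap((scope) =>
      Scope.addFinalizer(scope, Console.log("close remote file"))
    ),
    // closing the scope runs the finalizers in reverse order of registration
    Effect.flatMap((scope) => Scope.close(scope, Exit.unit))
  )
 
Effect.runPromise(program)
/*
Output:
close remote file
close network connection
*/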

By combining Scope with the Effect context, we gain a powerful way to manage resources effectively.

addFinalizer

Now, let's dive into the Effect.addFinalizer function, which provides a higher-level API for adding finalizers to the scope of an Effect value. These finalizers are guaranteed to execute when the associated scope is closed, and their behavior may depend on the Exit value with which the scope is closed.

Let's explore some examples to understand this better.

Let's observe how things behave in the event of success:

import { Effect, Console } from "effect"
 
// $ExpectType Effect<Scope, never, number>
const program = Effect.gen(function* (_) {
  yield* _(
    Effect.addFinalizer((exit) => Console.log(`finalizer after ${exit._tag}`))
  )
  return 1
})
 
// $ExpectType Effect<never, never, number>
const runnable = Effect.scoped(program)
 
Effect.runPromise(runnable).then(console.log, console.error)
/*
Output:
finalizer after Success
1
*/

Here, the Effect.addFinalizer operator adds a Scope to the context required by the workflow, as indicated by:

Effect<Scope, never, void>

This signifies that the workflow needs a Scope to execute. You can provide this Scope using the Effect.scoped operator. It creates a new Scope, supplies it to the workflow, and closes the Scope once the workflow is complete.

The Effect.scoped operator removes the Scope from the context, indicating that the workflow no longer requires any resources associated with a scope.

Next, let's explore how things behave in the event of a failure:

import { Effect, Console } from "effect"
 
// $ExpectType Effect<Scope, string, never>
const program = Effect.gen(function* (_) {
  yield* _(
    Effect.addFinalizer((exit) => Console.log(`finalizer after ${exit._tag}`))
  )
  return yield* _(Effect.fail("Uh oh!"))
})
 
// $ExpectType Effect<never, string, never>
const runnable = Effect.scoped(program)
 
Effect.runPromise(runnable).then(console.log, console.error)
/*
Output:
finalizer after Failure
{
  _id: "FiberFailure",
  cause: {
    _id: "Cause",
    _tag: "Fail",
    failure: "Uh oh!"
  }
}
*/

Finally, let's explore the behavior in the event of an interruption:

import { Effect, Console } from "effect"
 
// $ExpectType Effect<Scope, never, never>
const program = Effect.gen(function* (_) {
  yield* _(
    Effect.addFinalizer((exit) => Console.log(`finalizer after ${exit._tag}`))
  )
  return yield* _(Effect.interrupt)
})
 
// $ExpectType Effect<never, never, never>
const runnable = Effect.scoped(program)
 
Effect.runPromise(runnable).then(console.log, console.error)
/*
Output:
finalizer after Failure
{
  _id: "FiberFailure",
  cause: {
    _id: "Cause",
    _tag: "Interrupt",
    fiberId: {
      _id: "FiberId",
      _tag: "Runtime",
      id: 0,
      startTimeMillis: ...
    }
  }
}
*/

Manually Create and Close Scopes

When you're working with multiple scoped resources within a single operation, it's important to understand how their scopes interact. By default, these scopes are merged into one, but you can have more fine-grained control over when each scope is closed by manually creating and closing them.

Let's start by looking at how scopes are merged by default. Take a look at this code example:

import { Effect, Console } from "effect"
 
// $ExpectType Effect<Scope, never, void>
const task1 = Effect.gen(function* (_) {
  console.log("task 1")
  yield* _(Effect.addFinalizer(() => Console.log("finalizer after task 1")))
})
 
// $ExpectType Effect<Scope, never, void>
const task2 = Effect.gen(function* (_) {
  console.log("task 2")
  yield* _(Effect.addFinalizer(() => Console.log("finalizer after task 2")))
})
 
// $ExpectType Effect<Scope, never, void>
const program = Effect.gen(function* (_) {
  // Both of these scopes are merged into one
  yield* _(task1)
  yield* _(task2)
})
 
Effect.runPromise(program.pipe(Effect.scoped))
/*
Output:
task 1
task 2
finalizer after task 2
finalizer after task 1
*/

In this case, the scopes of task1 and task2 are merged into a single scope, and when the program is run, both finalizers execute only when that single scope is closed, in reverse order of registration.

If you want more control over when each scope is closed, you can manually create and close them, as shown in this example:

import { Console, Effect, Exit, Scope } from "effect"
 
const task1 = Effect.gen(function* (_) {
  console.log("task 1")
  yield* _(Effect.addFinalizer(() => Console.log("finalizer after task 1")))
})
 
const task2 = Effect.gen(function* (_) {
  console.log("task 2")
  yield* _(Effect.addFinalizer(() => Console.log("finalizer after task 2")))
})
 
const program = Effect.gen(function* (_) {
  const scope1 = yield* _(Scope.make())
  const scope2 = yield* _(Scope.make())
 
  // Extend the scope of task1 into scope1
  yield* _(task1, Scope.extend(scope1))
 
  // Extend the scope of task2 into scope2
  yield* _(task2, Scope.extend(scope2))
 
  // Close scope1 and scope2 manually
  yield* _(Scope.close(scope1, Exit.unit))
  yield* _(Console.log("doing something else"))
  yield* _(Scope.close(scope2, Exit.unit))
})
 
Effect.runPromise(program.pipe(Effect.scoped))
/*
Output:
task 1
task 2
finalizer after task 1
doing something else
finalizer after task 2
*/

In this example, we create two separate scopes, scope1 and scope2, and extend the scope of each task into its respective scope. When you run the program, each finalizer now runs when its own scope is closed, which allows other work ("doing something else") to happen in between.

The Scope.extend function fulfills the scope requirement of an Effect workflow using another, existing scope, without closing that scope when the workflow finishes executing. This lets you extend the lifetime of a scoped value into a larger scope.

You might wonder what happens when a scope is closed but a task within that scope hasn't completed yet. The key point is that closing the scope doesn't force the task to be interrupted: it keeps running, and any finalizer it registers after the scope has been closed executes immediately. The call to close only waits for the finalizers that were already registered at the moment the scope is closed.

So, finalizers run when the scope is closed, not necessarily when the effect finishes running. When you're using Effect.scoped, the scope is managed automatically, and finalizers are executed accordingly. However, when you manage the scope manually, you have control over when finalizers are executed.

Defining Resources

We can define a resource using operators like Effect.acquireRelease(acquire, release), which allows us to create a scoped value from an acquire and release workflow.

Every acquire-release workflow involves three actions:

  • Acquiring Resource. An effect describing the acquisition of resource. For example, opening a file.
  • Using Resource. An effect describing the actual process to produce a result. For example, counting the number of lines in a file.
  • Releasing Resource. An effect describing the final step of releasing or cleaning up the resource. For example, closing a file.

The acquireRelease operator performs the acquire workflow uninterruptibly. This is important because, if interruption were allowed during resource acquisition, the workflow could be interrupted while the resource was only partially acquired.

The guarantee of the acquireRelease operator is that if the acquire workflow completes successfully, then the release workflow is guaranteed to run when the Scope is closed.

For example, let's define a simple resource:

resource.ts
import { Effect } from "effect"
 
// Define the interface for the resource
export interface MyResource {
  readonly contents: string
  readonly close: () => Promise<void>
}
 
// Simulate getting the resource
const getMyResource = (): Promise<MyResource> =>
  Promise.resolve({
    contents: "lorem ipsum",
    close: () =>
      new Promise((resolve) => {
        console.log("Resource released")
        resolve()
      })
  })
 
// Define the acquisition of the resource with error handling
export const acquire = Effect.tryPromise({
  try: () =>
    getMyResource().then((res) => {
      console.log("Resource acquired")
      return res
    }),
  catch: () => new Error("getMyResourceError")
})
 
// Define the release of the resource
export const release = (res: MyResource) => Effect.promise(() => res.close())
 
// $ExpectType Effect<Scope, Error, MyResource>
export const resource = Effect.acquireRelease(acquire, release)

Notice that the acquireRelease operator added a Scope to the context required by the workflow:

Effect<Scope, Error, MyResource>

This indicates that the workflow needs a Scope to run and adds a finalizer that will close the resource when the scope is closed.

We can continue working with the resource for as long as we want by using flatMap or other Effect operators. For example, here's how we can read the contents:

import { Effect } from "effect"
import { resource } from "./resource"
 
// $ExpectType Effect<Scope, Error, void>
const program = Effect.gen(function* (_) {
  const res = yield* _(resource)
  console.log(`content is ${res.contents}`)
})
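As a point of comparison, the same program can be sketched in pipeline style with Effect.flatMap instead of Effect.gen (assuming the same ./resource module defined above):

import { Effect, Console } from "effect"
import { resource } from "./resource"
 
// $ExpectType Effect<Scope, Error, void>
const program = resource.pipe(
  Effect.flatMap((res) => Console.log(`content is ${res.contents}`))
)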

Once we are done working with the resource, we can close the scope using the Effect.scoped operator. It creates a new Scope, provides it to the workflow, and closes the Scope when the workflow is finished.

import { Effect } from "effect"
import { resource } from "./resource"
 
// $ExpectType Effect<never, Error, void>
const program = Effect.scoped(
  Effect.gen(function* (_) {
    const res = yield* _(resource)
    console.log(`content is ${res.contents}`)
  })
)

The Effect.scoped operator removes the Scope from the context, indicating that there are no longer any resources used by this workflow which require a scope.

We now have a workflow that is ready to run:

Effect.runPromise(program)
/*
Resource acquired
content is lorem ipsum
Resource released
*/

acquireUseRelease

The Effect.acquireUseRelease(acquire, use, release) function is a specialized version of the acquireRelease function that simplifies resource management by automatically handling the scoping of resources.

The main difference is that acquireUseRelease eliminates the need to manually call Effect.scoped to manage the resource's scope. It has additional knowledge about when you are done using the resource created with the acquire step. This is achieved by providing a use argument, which represents the function that operates on the acquired resource. As a result, acquireUseRelease can automatically determine when it should execute the release step.

Here's an example that demonstrates the usage of acquireUseRelease:

import { Effect, Console } from "effect"
import { MyResource, acquire, release } from "./resource"
 
const use = (res: MyResource) => Console.log(`content is ${res.contents}`)
 
// $ExpectType Effect<never, Error, void>
const program = Effect.acquireUseRelease(acquire, use, release)
 
Effect.runPromise(program)
/*
Output:
Resource acquired
content is lorem ipsum
Resource released
*/
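For comparison, here is a rough sketch of how the same behavior could be written by combining Effect.acquireRelease with Effect.scoped by hand; acquireUseRelease simply packages this pattern for you:

import { Effect, Console } from "effect"
import { MyResource, acquire, release } from "./resource"
 
const use = (res: MyResource) => Console.log(`content is ${res.contents}`)
 
// Roughly equivalent to Effect.acquireUseRelease(acquire, use, release)
// $ExpectType Effect<never, Error, void>
const program = Effect.scoped(
  Effect.acquireRelease(acquire, release).pipe(Effect.flatMap(use))
)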

Pattern: Sequence of Operations with Compensating Actions on Failure

In certain scenarios, you might need to perform a sequence of chained operations where the success of each operation depends on the previous one. However, if any of the operations fail, you would want to reverse the effects of all previous successful operations. This pattern is valuable when you need to ensure that either all operations succeed, or none of them have any effect at all.

Effect offers a way to achieve this pattern using the acquireRelease function in combination with the Exit type. The acquireRelease function allows you to acquire a resource, perform operations with it, and release the resource when you're done. The Exit type represents the outcome of an effectful computation, indicating whether it succeeded or failed.

Let's go through an example of implementing this pattern. Suppose we want to create a "Workspace" in our application, which involves creating an S3 bucket, an ElasticSearch index, and a Database entry that relies on the previous two.

To begin, we define the domain model for the required services: S3, ElasticSearch, and Database.

Services.ts
import { Effect, Context } from "effect"
 
export class S3Error {
  readonly _tag = "S3Error"
}
 
export interface Bucket {
  readonly name: string
}
 
export interface S3 {
  createBucket: Effect.Effect<never, S3Error, Bucket>
  deleteBucket: (bucket: Bucket) => Effect.Effect<never, never, void>
}
 
export const S3 = Context.Tag<S3>()
 
export class ElasticSearchError {
  readonly _tag = "ElasticSearchError"
}
 
export interface Index {
  readonly id: string
}
 
export interface ElasticSearch {
  createIndex: Effect.Effect<never, ElasticSearchError, Index>
  deleteIndex: (index: Index) => Effect.Effect<never, never, void>
}
 
export const ElasticSearch = Context.Tag<ElasticSearch>()
 
export class DatabaseError {
  readonly _tag = "DatabaseError"
}
 
export interface Entry {
  readonly id: string
}
 
export interface Database {
  createEntry: (
    bucket: Bucket,
    index: Index
  ) => Effect.Effect<never, DatabaseError, Entry>
  deleteEntry: (entry: Entry) => Effect.Effect<never, never, void>
}
 
export const Database = Context.Tag<Database>()

Next, we define the three create actions and the overall transaction (make) for the Workspace.

Workspace.ts
import { Effect, Exit } from "effect"
import * as Services from "./Services"
 
// Create a bucket, and define the release function that deletes the bucket if the operation fails.
const createBucket = Effect.gen(function* (_) {
  const { createBucket, deleteBucket } = yield* _(Services.S3)
  return yield* _(
    Effect.acquireRelease(createBucket, (bucket, exit) =>
      // The release function for the Effect.acquireRelease operation is responsible for handling the acquired resource (bucket) after the main effect has completed.
      // It is called regardless of whether the main effect succeeded or failed.
      // If the main effect failed, Exit.isFailure(exit) will be true, and the function will perform a rollback by calling deleteBucket(bucket).
      // If the main effect succeeded, Exit.isFailure(exit) will be false, and the function will return Effect.unit, representing a successful, but do-nothing effect.
      Exit.isFailure(exit) ? deleteBucket(bucket) : Effect.unit
    )
  )
})
 
// Create an index, and define the release function that deletes the index if the operation fails.
const createIndex = Effect.gen(function* (_) {
  const { createIndex, deleteIndex } = yield* _(Services.ElasticSearch)
  return yield* _(
    Effect.acquireRelease(createIndex, (index, exit) =>
      Exit.isFailure(exit) ? deleteIndex(index) : Effect.unit
    )
  )
})
 
// Create an entry in the database, and define the release function that deletes the entry if the operation fails.
const createEntry = (bucket: Services.Bucket, index: Services.Index) =>
  Effect.gen(function* (_) {
    const { createEntry, deleteEntry } = yield* _(Services.Database)
    return yield* _(
      Effect.acquireRelease(createEntry(bucket, index), (entry, exit) =>
        Exit.isFailure(exit) ? deleteEntry(entry) : Effect.unit
      )
    )
  })
 
// $ExpectType Effect<S3 | ElasticSearch | Database, S3Error | ElasticSearchError | DatabaseError, Entry>
export const make = Effect.scoped(
  Effect.gen(function* (_) {
    const bucket = yield* _(createBucket)
    const index = yield* _(createIndex)
    return yield* _(createEntry(bucket, index))
  })
)

We then create simple service implementations to test the behavior of our Workspace code, using layers to construct the test services. These layers can simulate various scenarios, including errors, which we control with the FailureCase type.

WorkspaceTest.ts
import { Effect, Context, Layer, Console } from "effect"
import * as Services from "./Services"
import * as Workspace from "./Workspace"
 
// The `FailureCase` type allows us to provide different error scenarios while testing our services.
// For example, by providing the value "S3", we can simulate an error scenario specific to the S3 service.
// This helps us ensure that our program handles errors correctly and behaves as expected in various situations.
// Similarly, we can provide other values like "ElasticSearch" or "Database" to simulate error scenarios for those services.
// In cases where we want to test the absence of errors, we can provide `undefined`.
// By using this parameter, we can thoroughly test our services and verify their behavior under different error conditions.
type FailureCase = "S3" | "ElasticSearch" | "Database" | undefined
 
const FailureCase = Context.Tag<FailureCase>()
 
// Create a test layer for the S3 service
 
// $ExpectType Layer<FailureCase, never, S3>
const S3Test = Layer.effect(
  Services.S3,
  Effect.gen(function* (_) {
    const failureCase = yield* _(FailureCase)
    return Services.S3.of({
      createBucket: Effect.gen(function* (_) {
        console.log("[S3] creating bucket")
        if (failureCase === "S3") {
          return yield* _(Effect.fail(new Services.S3Error()))
        } else {
          return { name: "<bucket.name>" }
        }
      }),
      deleteBucket: (bucket) => Console.log(`[S3] delete bucket ${bucket.name}`)
    })
  })
)
 
// Create a test layer for the ElasticSearch service
 
// $ExpectType Layer<FailureCase, never, ElasticSearch>
const ElasticSearchTest = Layer.effect(
  Services.ElasticSearch,
  Effect.gen(function* (_) {
    const failureCase = yield* _(FailureCase)
    return Services.ElasticSearch.of({
      createIndex: Effect.gen(function* (_) {
        console.log("[ElasticSearch] creating index")
        if (failureCase === "ElasticSearch") {
          return yield* _(Effect.fail(new Services.ElasticSearchError()))
        } else {
          return { id: "<index.id>" }
        }
      }),
      deleteIndex: (index) =>
        Console.log(`[ElasticSearch] delete index ${index.id}`)
    })
  })
)
 
// Create a test layer for the Database service
 
// $ExpectType Layer<FailureCase, never, Database>
const DatabaseTest = Layer.effect(
  Services.Database,
  Effect.gen(function* (_) {
    const failureCase = yield* _(FailureCase)
    return Services.Database.of({
      createEntry: (bucket, index) =>
        Effect.gen(function* (_) {
          console.log(
            `[Database] creating entry for bucket ${bucket.name} and index ${index.id}`
          )
          if (failureCase === "Database") {
            return yield* _(Effect.fail(new Services.DatabaseError()))
          } else {
            return { id: "<entry.id>" }
          }
        }),
      deleteEntry: (entry) => Console.log(`[Database] delete entry ${entry.id}`)
    })
  })
)
 
// Merge all the test layers for S3, ElasticSearch, and Database services into a single layer
 
// $ExpectType Layer<FailureCase, never, S3 | ElasticSearch | Database>
const layer = Layer.mergeAll(S3Test, ElasticSearchTest, DatabaseTest)
 
// Create a runnable effect to test the Workspace code
// The effect is provided with the test layer and a FailureCase service with undefined value (no failure case)
 
// $ExpectType Effect<never, S3Error | ElasticSearchError | DatabaseError, Entry>
const runnable = Workspace.make.pipe(
  Effect.provide(layer),
  Effect.provideService(FailureCase, undefined)
)
 
Effect.runPromise(Effect.either(runnable)).then(console.log)

Let's examine the test results for the scenario where FailureCase is set to undefined (happy path):

Terminal
[S3] creating bucket
[ElasticSearch] creating index
[Database] creating entry for bucket <bucket.name> and index <index.id>
{
  _id: "Either",
  _tag: "Right",
  right: {
    id: "<entry.id>"
  }
}

In this case, all operations succeed, and we see a successful result with right({ id: '<entry.id>' }).

Now, let's simulate a failure in the Database:

const runnable = Workspace.make.pipe(
  Effect.provide(layer),
  Effect.provideService(Failure, "Database")
)

The console output will be:

Terminal
[S3] creating bucket
[ElasticSearch] creating index
[Database] creating entry for bucket <bucket.name> and index <index.id>
[ElasticSearch] delete index <index.id>
[S3] delete bucket <bucket.name>
{
  _id: "Either",
  _tag: "Left",
  left: {
    _tag: "DatabaseError"
  }
}

You can observe that once the Database error occurs, there is a complete rollback that deletes the ElasticSearch index first and then the associated S3 bucket. The result is a failure with left(new DatabaseError()).

Let's now make the index creation fail instead:

const runnable = Workspace.make.pipe(
  Effect.provide(layer),
  Effect.provideService(Failure, "ElasticSearch")
)

In this case, the console output will be:

Terminal
[S3] creating bucket
[ElasticSearch] creating index
[S3] delete bucket <bucket.name>
{
  _id: "Either",
  _tag: "Left",
  left: {
    _tag: "ElasticSearchError"
  }
}

As expected, once the ElasticSearch index creation fails, there is a rollback that deletes the S3 bucket. The result is a failure with left(new ElasticSearchError()).