Raycasting and Mouse Events

Bridging the gap between screen clicks and 3D objects

The problem with clicking in 3D

In a normal web page, clicking a button is straightforward. The browser knows where the button is and fires a click event. In Three.js, you're rendering a 3D scene onto a flat 2D canvas. The browser has no idea that a cube or a character exists inside that canvas. As far as the DOM is concerned, it's just pixels. So how do you know which 3D object the user clicked on?

The answer is raycasting. You shoot an invisible ray from the camera through the point on the screen where the user clicked, and check which objects in the scene that ray intersects. It's the fundamental technique for all mouse-based interaction in Three.js.

How raycasting works

A raycast works in three steps:

  1. Convert the mouse position from screen pixels to normalized device coordinates (NDC), a coordinate system where the center of the canvas is (0, 0), the top-right is (1, 1), and the bottom-left is (-1, -1).
  2. Use Raycaster.setFromCamera() to create a ray that starts at the camera and passes through that NDC point.
  3. Call Raycaster.intersectObjects() to test which objects the ray hits. The results come back sorted by distance, closest first.
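
Put together, the whole flow is only a dozen or so lines. Here's a compact sketch, assuming a scene, camera, and renderer already exist; each step is unpacked in the sections that follow:

typescript
import * as THREE from 'three';

const raycaster = new THREE.Raycaster();
const mouse = new THREE.Vector2();

renderer.domElement.addEventListener('click', (event: MouseEvent) => {
  // Step 1: convert pixels to normalized device coordinates
  mouse.x = (event.clientX / window.innerWidth) * 2 - 1;
  mouse.y = -(event.clientY / window.innerHeight) * 2 + 1;

  // Step 2: build a ray from the camera through that point
  raycaster.setFromCamera(mouse, camera);

  // Step 3: test the scene, closest hit first
  const hits = raycaster.intersectObjects(scene.children, true);
  if (hits.length > 0) {
    console.log('Clicked:', hits[0].object.name);
  }
});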

Setting up the raycaster

Three.js gives you a Raycaster class that handles the math. You pair it with a Vector2 to track the mouse position in NDC space.

typescript
const raycaster = new THREE.Raycaster();
const mouse = new THREE.Vector2();

The raycaster and mouse vector are created once and reused. There's no need to instantiate new ones on every event.

Converting mouse coordinates

Browser mouse events give you pixel coordinates relative to the page. You need to convert those into the -1 to +1 range that the raycaster expects. If your canvas fills the full window, the conversion looks like this:

typescript
function onMouseMove(event: MouseEvent) {
  // Convert pixel coordinates to NDC (-1 to +1)
  mouse.x = (event.clientX / window.innerWidth) * 2 - 1;
  mouse.y = -(event.clientY / window.innerHeight) * 2 + 1;
}

window.addEventListener('mousemove', onMouseMove);

Notice that the Y axis is flipped. In the browser, Y increases downward. In NDC, Y increases upward. That's why there's a negation on the Y calculation.
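
For example, in a hypothetical 1000 × 800 px window, a click at clientX = 750, clientY = 200 maps to mouse.x = (750 / 1000) * 2 - 1 = 0.5 and mouse.y = -(200 / 800) * 2 + 1 = 0.5, a point up and to the right of center.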

If your canvas doesn't fill the entire window, you'll need to account for the canvas's position and size using getBoundingClientRect():

typescript
function onMouseMove(event: MouseEvent) {
  const rect = renderer.domElement.getBoundingClientRect();
  mouse.x = ((event.clientX - rect.left) / rect.width) * 2 - 1;
  mouse.y = -((event.clientY - rect.top) / rect.height) * 2 + 1;
}

renderer.domElement.addEventListener('mousemove', onMouseMove);

This version is more robust and works regardless of where the canvas sits on the page. It's the one you should default to in most projects.

Detecting intersections

With the mouse position tracked, you cast the ray and check for hits. This typically happens inside your animation loop or in response to a click event:

typescript
function onClick(event: MouseEvent) {
  const rect = renderer.domElement.getBoundingClientRect();
  mouse.x = ((event.clientX - rect.left) / rect.width) * 2 - 1;
  mouse.y = -((event.clientY - rect.top) / rect.height) * 2 + 1;

  raycaster.setFromCamera(mouse, camera);

  const intersects = raycaster.intersectObjects(scene.children);

  if (intersects.length > 0) {
    const firstHit = intersects[0];
    console.log('Hit:', firstHit.object.name);
    console.log('At point:', firstHit.point);
    console.log('Distance:', firstHit.distance);
  }
}

renderer.domElement.addEventListener('click', onClick);

The intersects array contains objects with useful information:

  • object: The Object3D that was hit
  • point: The exact Vector3 world position where the ray hit the surface
  • distance: How far the hit point is from the camera
  • face: The triangle face that was intersected
  • faceIndex: The index of that face in the geometry
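
The point field is handy for placing things exactly where the user clicked. Here's a minimal sketch that drops a marker sphere at the hit position; the geometry, size, and color are arbitrary choices for illustration:

typescript
// Drop a small marker sphere at the intersection point.
const markerGeometry = new THREE.SphereGeometry(0.05, 16, 16);
const markerMaterial = new THREE.MeshBasicMaterial({ color: 0xff0000 });

function placeMarker(hit: THREE.Intersection) {
  const marker = new THREE.Mesh(markerGeometry, markerMaterial);
  marker.position.copy(hit.point); // exact world-space hit position
  scene.add(marker);
}

// Inside the click handler above:
// if (intersects.length > 0) placeMarker(intersects[0]);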

Recursive intersection testing

In older releases of Three.js, intersectObjects only tested the objects you passed in directly; in newer releases the recursive flag defaults to true, so descendants are tested as well. Rather than relying on the default, pass the second argument explicitly. If your scene uses groups or loaded models (which are deeply nested), you want true so children are tested recursively:

typescript
// Explicitly test only the objects in the array, ignoring their descendants
raycaster.intersectObjects(scene.children, false);

// Test all descendants, including nested meshes inside groups
raycaster.intersectObjects(scene.children, true);

If you're working with loaded glTF models, you almost always want recursive mode. The meshes inside a loaded model are buried several levels deep in the scene graph.
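
A related wrinkle: a recursive raycast returns the nested mesh that was hit, not the model it belongs to. One way to map a hit back to its model, sketched below with a hypothetical userData flag and model path, is to tag the loaded root and walk up the parent chain:

typescript
import * as THREE from 'three';
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';

const loader = new GLTFLoader();
loader.load('models/example.glb', (gltf) => {
  // Hypothetical tag so the root can be recovered from any nested hit
  gltf.scene.userData.isInteractiveModel = true;
  scene.add(gltf.scene);
});

function findModelRoot(object: THREE.Object3D): THREE.Object3D | null {
  // Walk up the parent chain until we reach a tagged root (or run out of parents)
  let current: THREE.Object3D | null = object;
  while (current) {
    if (current.userData.isInteractiveModel) return current;
    current = current.parent;
  }
  return null;
}

// Inside a click handler, after raycaster.setFromCamera(mouse, camera):
const hits = raycaster.intersectObjects(scene.children, true);
if (hits.length > 0) {
  const root = findModelRoot(hits[0].object);
  if (root) console.log('Clicked model:', root.name);
}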

Hover effects

A common pattern is highlighting objects when the user hovers over them. You do this by raycasting on every frame (or on mousemove) and tracking which object is currently under the cursor:

typescript
// Assumes the hovered object is a Mesh whose material has an emissive color
// (e.g. MeshStandardMaterial or MeshPhongMaterial).
let hoveredObject: THREE.Mesh | null = null;

function onMouseMove(event: MouseEvent) {
  const rect = renderer.domElement.getBoundingClientRect();
  mouse.x = ((event.clientX - rect.left) / rect.width) * 2 - 1;
  mouse.y = -((event.clientY - rect.top) / rect.height) * 2 + 1;

  raycaster.setFromCamera(mouse, camera);
  const intersects = raycaster.intersectObjects(scene.children, true);

  // Reset the previous hover
  if (hoveredObject) {
    (hoveredObject.material as THREE.MeshStandardMaterial).emissive.setHex(0x000000);
    hoveredObject = null;
    document.body.style.cursor = 'default';
  }

  // Apply hover to the first hit
  if (intersects.length > 0) {
    hoveredObject = intersects[0].object as THREE.Mesh;
    (hoveredObject.material as THREE.MeshStandardMaterial).emissive.setHex(0x333333);
    document.body.style.cursor = 'pointer';
  }
}

The emissive property adds a glow-like tint to the material without replacing its base color. Setting it to a dark gray on hover gives subtle but clear visual feedback. You can also swap materials entirely, change opacity, adjust scale, or do whatever makes sense for your project.
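
One caveat: resetting the emissive color to 0x000000 assumes it was black to begin with. A slightly more careful variant, sketched here with a hypothetical originalEmissive variable, saves and restores whatever color the material had:

typescript
// Remember the original emissive color instead of assuming it was black.
// `originalEmissive` is a hypothetical module-level variable for illustration.
let originalEmissive = 0x000000;

function setHovered(mesh: THREE.Mesh | null) {
  // Restore the previous hover target, if any
  if (hoveredObject) {
    (hoveredObject.material as THREE.MeshStandardMaterial).emissive.setHex(originalEmissive);
    hoveredObject = null;
  }
  // Highlight the new target and remember what we overwrote
  if (mesh) {
    const material = mesh.material as THREE.MeshStandardMaterial;
    originalEmissive = material.emissive.getHex();
    material.emissive.setHex(0x333333);
    hoveredObject = mesh;
  }
}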

Limiting what gets tested

Raycasting against every object in the scene is wasteful if you only care about a few interactive objects. Instead, keep a separate array of clickable objects and test only against those:

typescript
const clickableObjects: THREE.Object3D[] = [];

// When creating interactive objects, add them to the array
const cube = new THREE.Mesh(geometry, material);
cube.name = 'interactive-cube';
scene.add(cube);
clickableObjects.push(cube);

// Raycast only against clickable objects
const intersects = raycaster.intersectObjects(clickableObjects, true);

This is both a performance optimization and a way to keep your interaction logic clean. Background objects, ground planes, and decorative meshes won't interfere with your click handling.

Changing the cursor

A small but important detail is changing the cursor to indicate that something is interactive. The canvas shows the standard arrow cursor, which gives users no hint that they can click on things. Updating document.body.style.cursor (or the canvas element's cursor style) based on hover state makes a big difference for usability.
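
The hover example above already sets document.body.style.cursor; a sketch of the canvas-level variant is just as short:

typescript
// Update the canvas cursor from the current hover state.
renderer.domElement.style.cursor = hoveredObject ? 'pointer' : 'default';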

Performance considerations

Raycasting is not free. It tests the ray against every triangle in every object you pass to intersectObjects. For simple scenes with a few dozen objects, this is fast enough that you won't notice. For complex scenes with high-poly models, it can become a bottleneck if you're doing it every frame.

A few strategies to keep it fast:

  • Limit the objects array to only interactive objects, as shown above
  • Use simpler collision meshes (invisible low-poly versions) for raycast testing instead of the detailed visible meshes
  • Throttle mousemove raycasts so they don't fire on every pixel of movement (a throttling sketch appears at the end of this section)
  • Set raycaster.far to limit how far the ray travels, skipping distant objects

The last point, together with the layers system described below, looks like this:

typescript
// Only test objects within 100 units of the camera
raycaster.far = 100;

// Use layers to control what the raycaster can see
raycaster.layers.set(1);
interactiveMesh.layers.enable(1);

The layers system is a useful alternative to maintaining your own arrays. Objects and raycasters each have a layer mask, and the raycaster only tests objects that share at least one enabled layer. This lets you tag objects as interactive without managing separate arrays.
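
As for the throttling strategy from the list above, here's a minimal sketch; the 50 ms interval is an arbitrary choice, and clickableObjects is the array from the earlier section:

typescript
// Throttle mousemove raycasts to at most one every 50 ms.
let lastRaycastTime = 0;
const RAYCAST_INTERVAL_MS = 50;

function onThrottledMouseMove(event: MouseEvent) {
  const now = performance.now();
  if (now - lastRaycastTime < RAYCAST_INTERVAL_MS) return; // too soon, skip this event
  lastRaycastTime = now;

  const rect = renderer.domElement.getBoundingClientRect();
  mouse.x = ((event.clientX - rect.left) / rect.width) * 2 - 1;
  mouse.y = -((event.clientY - rect.top) / rect.height) * 2 + 1;

  raycaster.setFromCamera(mouse, camera);
  const intersects = raycaster.intersectObjects(clickableObjects, true);
  // ...update hover state as shown earlier
}

renderer.domElement.addEventListener('mousemove', onThrottledMouseMove);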
