mozilla/DeepSpeech (public archive)

This repository was archived by the owner on Jun 19, 2025. It is now read-only.

batching during inferencing #838 (Open)

@abuvaneswari opened on Sep 18, 2017

Hello,
Does native_client support inference on more than one audio file at the same time? I am looking to use my GPU for inference and to optimize its utilization by batching requests from multiple audio files.
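
For context on what is being asked, here is a minimal sketch of the client-side alternative to true in-graph batching: overlapping several single-file inference calls. It assumes the later deepspeech Python bindings (where Model(path) loads a model and stt() accepts 16-bit, 16 kHz mono PCM as a NumPy array); the model path and file names are hypothetical, and this illustrates request concurrency rather than a batched forward pass.

```python
import wave
import numpy as np
from concurrent.futures import ThreadPoolExecutor
from deepspeech import Model

MODEL_PATH = "deepspeech.pbmm"             # hypothetical model path
AUDIO_FILES = ["a.wav", "b.wav", "c.wav"]  # hypothetical 16 kHz mono WAVs

def transcribe(path):
    # One Model per worker: a single instance is not documented as
    # safe for concurrent stt() calls, so it is not shared here.
    model = Model(MODEL_PATH)
    with wave.open(path, "rb") as w:
        audio = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)
    return path, model.stt(audio)

# Overlap the requests with a thread pool. Note this is concurrency,
# not batching: each stt() call still runs one utterance through the
# graph, so the GPU never sees a batch dimension greater than 1.
with ThreadPoolExecutor(max_workers=len(AUDIO_FILES)) as pool:
    for path, text in pool.map(transcribe, AUDIO_FILES):
        print(path, "->", text)
```

This only improves throughput by keeping the device busy across overlapping requests; the batched utilization the question asks about would require the inference graph itself to accept multiple utterances at once.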
