DMA Engine API Guide
====================

Vinod Koul <vinod dot koul at intel dot com>
.. note:: For DMA Engine usage in async_tx please see:
          ``Documentation/crypto/async-tx-api.txt``
Below is a guide for device driver writers on how to use the Slave-DMA API of
the DMA Engine. This is applicable only for slave DMA usage.
DMA usage
---------

The slave DMA usage consists of following steps:
- Allocate a DMA slave channel

- Set slave and controller specific parameters

- Get a descriptor for transaction

- Submit the transaction

- Issue pending requests and wait for callback notification
The details of these operations are:

1. Allocate a DMA slave channel
   Channel allocation is slightly different in the slave DMA context;
   client drivers typically need a channel from a particular DMA
   controller only, and in some cases even a specific channel is desired.
   To request a channel, the dma_request_chan() API is used.
   Interface:

   .. code-block:: c

      struct dma_chan *dma_request_chan(struct device *dev, const char *name);
   This will find and return the ``name`` DMA channel associated with
   the ``dev`` device. The association is done via DT, ACPI or a
   board-file-based dma_slave_map matching table.
   A channel allocated via this interface is exclusive to the caller,
   until dma_release_channel() is called.
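
   For illustration, a minimal sketch of requesting a channel at probe
   time and releasing it again (the ``"rx"`` channel name and the error
   handling context are assumptions, not mandated by the API):

   .. code-block:: c

      struct dma_chan *chan;

      /* Look up the channel named "rx" (hypothetical name) for this device */
      chan = dma_request_chan(dev, "rx");
      if (IS_ERR(chan))
              return PTR_ERR(chan); /* may be -EPROBE_DEFER */

      /* ... use the channel ... */

      /* Hand the channel back once the driver is done with it */
      dma_release_channel(chan);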
2. Set slave and controller specific parameters
   The next step is always to pass some specific information to the DMA
   driver. Most of the generic information which a slave DMA can use
   is in struct dma_slave_config. This allows the clients to specify
   DMA direction, DMA addresses, bus widths, DMA burst lengths etc.
   If some DMA controllers have more parameters to be sent then they
   should try to embed struct dma_slave_config in their controller
   specific structure. That gives flexibility to clients to pass more
   parameters, if required.
   Interface:

   .. code-block:: c

      int dmaengine_slave_config(struct dma_chan *chan,
                                 struct dma_slave_config *config)
   Please see the dma_slave_config structure definition in dmaengine.h
   for a detailed explanation of the struct members. Please note
   that the 'direction' member will be going away as it duplicates the
   direction given in the prepare call.
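
   As a sketch, configuring a channel for memory-to-device transfers
   into a peripheral FIFO might look as follows (the FIFO address, bus
   width and burst length are illustrative assumptions that depend on
   the peripheral and the controller):

   .. code-block:: c

      struct dma_slave_config cfg = { };
      int ret;

      cfg.direction = DMA_MEM_TO_DEV;                  /* see note above */
      cfg.dst_addr = fifo_phys_addr;                   /* device FIFO address (assumed) */
      cfg.dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES; /* 32-bit FIFO register */
      cfg.dst_maxburst = 8;                            /* words per burst (assumed) */

      ret = dmaengine_slave_config(chan, &cfg);
      if (ret)
              return ret;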
3. Get a descriptor for transaction
   For slave usage the various modes of slave transfers supported by the
   DMA-engine are:

   - slave_sg: DMA a list of scatter gather buffers from/to a peripheral
   - dma_cyclic: Perform a cyclic DMA operation from/to a peripheral until the
     operation is explicitly stopped (a short sketch follows the mapping
     example below).
   - interleaved_dma: This is common to Slave as well as M2M clients. For slave
     channels, the address of the device's FIFO may already be known to the
     driver. Various types of operations can be expressed by setting the
     appropriate values in the 'dma_interleaved_template' members.
   A non-NULL return of this transfer API represents a "descriptor" for
   the given transaction.
   Interface:

   .. code-block:: c

      struct dma_async_tx_descriptor *dmaengine_prep_slave_sg(
              struct dma_chan *chan, struct scatterlist *sgl,
              unsigned int sg_len, enum dma_transfer_direction direction,
              unsigned long flags);

      struct dma_async_tx_descriptor *dmaengine_prep_dma_cyclic(
              struct dma_chan *chan, dma_addr_t buf_addr, size_t buf_len,
              size_t period_len, enum dma_transfer_direction direction,
              unsigned long flags);

      struct dma_async_tx_descriptor *dmaengine_prep_interleaved_dma(
              struct dma_chan *chan, struct dma_interleaved_template *xt,
              unsigned long flags);
   The peripheral driver is expected to have mapped the scatterlist for
   the DMA operation prior to calling dmaengine_prep_slave_sg(), and must
   keep the scatterlist mapped until the DMA operation has completed.
   The scatterlist must be mapped using the DMA struct device.
   If a mapping needs to be synchronized later, dma_sync_*_for_*() must be
   called using the DMA struct device, too.
   So, normal setup should look like this:
   .. code-block:: c

      /* dir is an enum dma_data_direction, e.g. DMA_TO_DEVICE */
      nr_sg = dma_map_sg(chan->device->dev, sgl, sg_len, dir);
      if (nr_sg == 0)
              /* error */

      desc = dmaengine_prep_slave_sg(chan, sgl, nr_sg, direction, flags);
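
   For the cyclic case mentioned above, a minimal sketch might look like
   this (the layout of four periods per ring buffer is purely an
   illustrative assumption):

   .. code-block:: c

      /* One coherent ring buffer of 4 periods; one interrupt per period */
      buf = dma_alloc_coherent(chan->device->dev, 4 * period_len,
                               &buf_addr, GFP_KERNEL);
      if (!buf)
              /* error */

      desc = dmaengine_prep_dma_cyclic(chan, buf_addr, 4 * period_len,
                                       period_len, DMA_DEV_TO_MEM,
                                       DMA_PREP_INTERRUPT);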
   Once a descriptor has been obtained, the callback information can be
   added and the descriptor must then be submitted. Some DMA engine
   drivers may hold a spinlock between a successful preparation and
   submission so it is important that these two operations are closely
   paired.
   .. note::

      Although the async_tx API specifies that completion callback
      routines cannot submit any new operations, this is not the
      case for slave/cyclic DMA.
      For slave DMA, the subsequent transaction may not be available
      for submission prior to callback function being invoked, so
      slave DMA callbacks are permitted to prepare and submit a new
      transaction.
      For cyclic DMA, a callback function may wish to terminate the
      DMA via dmaengine_terminate_async().
      Therefore, it is important that DMA engine drivers drop any
      locks before calling the callback function which may cause a
      deadlock.
      Note that callbacks will always be invoked from the DMA
      engine's tasklet, never from interrupt context.
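
   As an illustration, attaching a completion callback to the descriptor
   before submission could look like this (xfer_done() and the driver
   context holding the completion are hypothetical, not part of the API):

   .. code-block:: c

      static void xfer_done(void *param)
      {
              struct my_dma_ctx *ctx = param; /* hypothetical driver context */

              complete(&ctx->dma_done);       /* runs from the engine's tasklet */
      }

      /* ... after a successful dmaengine_prep_slave_sg() ... */
      desc->callback = xfer_done;
      desc->callback_param = ctx;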
4. Submit the transaction
   Once the descriptor has been prepared and the callback information
   added, it must be placed on the DMA engine driver's pending queue.
   Interface:

   .. code-block:: c

      dma_cookie_t dmaengine_submit(struct dma_async_tx_descriptor *desc)
   This returns a cookie that can be used to check the progress of DMA
   engine activity via other DMA engine calls not covered in this document.
   dmaengine_submit() will not start the DMA operation; it merely adds
   it to the pending queue. For this, see step 5, dma_async_issue_pending.
   .. note::

      After calling ``dmaengine_submit()`` the submitted transfer descriptor
      (``struct dma_async_tx_descriptor``) belongs to the DMA engine.
      Consequently, the client must consider invalid the pointer to that
      descriptor.
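
   For example, a minimal submit sequence with error checking, using the
   standard dma_submit_error() helper on the returned cookie:

   .. code-block:: c

      dma_cookie_t cookie;

      cookie = dmaengine_submit(desc);
      if (dma_submit_error(cookie))
              /* error: the descriptor was not queued */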
5. Issue pending DMA requests and wait for callback notification
   The transactions in the pending queue can be activated by calling the
   issue_pending API. If the channel is idle then the first transaction
   in the queue is started and subsequent ones are queued up.
   On completion of each DMA operation, the next in queue is started and
   a tasklet triggered. The tasklet will then call the client driver
   completion callback routine for notification, if set.
   Interface:

   .. code-block:: c

      void dma_async_issue_pending(struct dma_chan *chan);
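
   Tying steps 4 and 5 together, a client might kick the queue and then
   sleep until the callback signals completion (ctx->dma_done is the
   hypothetical completion from the callback sketch above):

   .. code-block:: c

      dma_async_issue_pending(chan);

      /* Sleep until xfer_done() fires from the engine's tasklet */
      if (!wait_for_completion_timeout(&ctx->dma_done,
                                       msecs_to_jiffies(1000)))
              /* timed out: consider dmaengine_terminate_sync() and recovery */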
Further APIs
------------

1. Terminate APIs

   .. code-block:: c

      int dmaengine_terminate_sync(struct dma_chan *chan)
      int dmaengine_terminate_async(struct dma_chan *chan)
      int dmaengine_terminate_all(struct dma_chan *chan) /* DEPRECATED */
   This causes all activity for the DMA channel to be stopped, and may
   discard data in the DMA FIFO which hasn't been fully transferred.
   No callback functions will be called for any incomplete transfers.
   Two variants of this function are available.
   dmaengine_terminate_async() might not wait until the DMA has been fully
   stopped or until any running complete callbacks have finished. But it is
   possible to call dmaengine_terminate_async() from atomic context or from
   within a complete callback. dmaengine_synchronize() must be called before it
   is safe to free the memory accessed by the DMA transfer or free resources
   accessed from within the complete callback.
   dmaengine_terminate_sync() will wait for the transfer and any running
   complete callbacks to finish before it returns. But the function must not be
   called from atomic context or from within a complete callback.
   dmaengine_terminate_all() is deprecated and should not be used in new code.
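
   As an illustration, the two teardown situations might look like this
   (the surrounding contexts are assumptions):

   .. code-block:: c

      /* Process context: safe to wait for callbacks to finish */
      dmaengine_terminate_sync(chan);

      /* Atomic context or inside a completion callback: */
      dmaengine_terminate_async(chan);
      /* ... later, from process context, before freeing DMA memory: */
      dmaengine_synchronize(chan);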
2. Pause API

   .. code-block:: c

      int dmaengine_pause(struct dma_chan *chan)

   This pauses activity on the DMA channel without data loss.
3. Resume API

   .. code-block:: c

      int dmaengine_resume(struct dma_chan *chan)

   Resume a previously paused DMA channel. It is invalid to resume a
   channel which is not currently paused.
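
   A brief sketch of pairing the two calls (not every controller
   implements pause, so the return value must be checked):

   .. code-block:: c

      ret = dmaengine_pause(chan); /* -ENOSYS if the controller lacks pause */
      if (!ret) {
              /* ... channel is quiescent here ... */
              dmaengine_resume(chan); /* only valid on a paused channel */
      }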
4. Check Txn complete

   .. code-block:: c

      enum dma_status dma_async_is_tx_complete(struct dma_chan *chan,
              dma_cookie_t cookie, dma_cookie_t *last, dma_cookie_t *used)
   This can be used to check the status of the channel. Please see
   the documentation in include/linux/dmaengine.h for a more complete
   description of this API.
   This can be used in conjunction with dma_async_is_complete() and
   the cookie returned from dmaengine_submit() to check for
   completion of a specific DMA transaction.
   .. note::

      Not all DMA engine drivers can return reliable information for
      a running DMA channel. It is recommended that DMA engine users
      pause or stop (via dmaengine_terminate_sync()) the channel before
      using this API.
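
   For instance, polling the cookie obtained in step 4 (a sketch; the
   'last' and 'used' out-parameters may be NULL when not needed):

   .. code-block:: c

      enum dma_status status;

      status = dma_async_is_tx_complete(chan, cookie, NULL, NULL);
      if (status == DMA_COMPLETE)
              /* the transaction identified by 'cookie' has finished */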
5. Synchronize termination API

   .. code-block:: c

      void dmaengine_synchronize(struct dma_chan *chan)
   Synchronize the termination of the DMA channel to the current context.
   This function should be used after dmaengine_terminate_async() to synchronize
   the termination of the DMA channel to the current context. The function will
   wait for the transfer and any running complete callbacks to finish before it
   returns.
   If dmaengine_terminate_async() is used to stop the DMA channel this function
   must be called before it is safe to free memory accessed by previously
   submitted descriptors or to free any resources accessed within the complete
   callback of previously submitted descriptors.
   The behavior of this function is undefined if dma_async_issue_pending() has
   been called between dmaengine_terminate_async() and this function.
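
   To make the required ordering concrete, a hedged sketch of a stop path
   for the cyclic transfer set up earlier (my_stop_dma() and the ctx
   fields are hypothetical):

   .. code-block:: c

      static void my_stop_dma(struct my_dma_ctx *ctx)
      {
              /* Termination may already have been requested from a callback */
              dmaengine_terminate_async(ctx->chan);

              /* Process context: wait out the transfer and any running callback */
              dmaengine_synchronize(ctx->chan);

              /* Only now is it safe to free memory the DMA or callback used */
              dma_free_coherent(ctx->chan->device->dev, 4 * ctx->period_len,
                                ctx->buf, ctx->buf_addr);
      }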