PhysMemAdapter::PhysMemAdapter()
    : mIonFd(-1), mFrameWidth(0), mFrameHeight(0), mBufferCount(0),
      mBufferSize(0), mFormat(0), mQueueableCount(0)
{
    memset(mCameraBuffer, 0, sizeof(mCameraBuffer));
    mIonFd = ion_open();
}
int getIonFd(gralloc_module_t const *module)
{
    private_module_t *m = const_cast<private_module_t *>(
        reinterpret_cast<const private_module_t *>(module));
    if (m->ionfd == -1)
        m->ionfd = ion_open();
    return m->ionfd;
}
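A minimal usage sketch (not from any of the sources above): assuming libion's ion_alloc_fd() is available, the lazily opened client fd from getIonFd() can be used to allocate a shareable dma-buf fd. The helper name and heap mask are illustrative only.

/* Illustrative only: alloc_example() is a hypothetical helper. */
static int alloc_example(gralloc_module_t const *module, size_t len)
{
    int handle_fd = -1;
    /* libion: ion_alloc_fd(fd, len, align, heap_mask, flags, &handle_fd) */
    int err = ion_alloc_fd(getIonFd(module), len, 0,
                           ION_HEAP_SYSTEM_MASK /* assumed heap */, 0, &handle_fd);
    return err ? err : handle_fd;   /* caller close()s handle_fd when done */
}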
MEMPLUGIN_ERRORTYPE MemPlugin_ION_Open(void *pMemPluginHandle, OMX_U32 *pClient)
{
    MEMPLUGIN_ERRORTYPE eError = MEMPLUGIN_ERROR_NONE;
    OMX_S32 memClient = 0;   /* signed: ion_open() reports failure with a negative value */

    memClient = ion_open();
    if (memClient < 0) {
        DOMX_ERROR("ion open failed");
        eError = MEMPLUGIN_ERROR_UNDEFINED;
        goto EXIT;
    } else {
        *pClient = (OMX_U32)memClient;
    }

EXIT:
    if (eError != MEMPLUGIN_ERROR_NONE) {
        DOMX_EXIT("%s: failed with error %d", __FUNCTION__, eError);
    } else {
        DOMX_EXIT("%s: executed successfully", __FUNCTION__);
    }
    return eError;
}
static int __attribute__((constructor)) so_init(void)
{
    s_fd = ion_open();
    s_pid = getpid();
    ALOGD("pid = %d", s_pid);
    return 0;
}
int ion_alloc_test(int count)
{
    int fd, ret = 0, i, count_alloc;
    struct ion_handle **handle;

    fd = ion_open();
    if (fd < 0) {
        printf("%s(): FAILED to open ion device\n", __func__);
        return -1;
    }

    handle = (struct ion_handle **)malloc(count * sizeof(struct ion_handle *));
    if (handle == NULL) {
        printf("%s(): FAILED to allocate memory for ion_handles\n", __func__);
        ion_close(fd);   /* don't leak the ion client fd */
        return -ENOMEM;
    }

    /* Allocate ion_handles */
    count_alloc = count;
    for (i = 0; i < count; i++) {
        ret = _ion_alloc_test(fd, &(handle[i]));
        printf("%s(): Alloc handle[%d]=%p\n", __func__, i, handle[i]);
        if (ret || ((int)handle[i] == -ENOMEM)) {
            printf("%s(): Alloc handle[%d]=%p FAILED, err:%s\n",
                   __func__, i, handle[i], strerror(ret));
            count_alloc = i;
            goto err_alloc;
        }
    }

err_alloc:
    /* Free ion_handles */
    for (i = 0; i < count_alloc; i++) {
        printf("%s(): Free handle[%d]=%p\n", __func__, i, handle[i]);
        ret = ion_free(fd, handle[i]);
        if (ret) {
            printf("%s(): Free handle[%d]=%p FAILED, err:%s\n",
                   __func__, i, handle[i], strerror(ret));
        }
    }

    ion_close(fd);
    free(handle);
    handle = NULL;

    if (ret || (count_alloc != count)) {
        printf("\nion alloc test: FAILED\n\n");
        if (count_alloc != count)
            ret = -ENOMEM;
    } else {
        printf("\nion alloc test: PASSED\n\n");
    }
    return ret;
}
IonDmaMemManager::IonDmaMemManager(bool iommuEnabled)
    : MemManagerBase(),
      mPreviewData(NULL),
      mRawData(NULL),
      mJpegData(NULL),
      mVideoEncData(NULL),
      client_fd(-1),
      mIommuEnabled(iommuEnabled)
{
    client_fd = ion_open();
}
int MemoryManager::initialize()
{
    if (mIonFd == -1) {
        mIonFd = ion_open();
        if (mIonFd < 0) {
            printe("ion_open() failed, error: %d", mIonFd);
            mIonFd = -1;
            return -1;
        }
    }
    return 0;
}
int ion_phys(int fd, ion_user_handle_t handle, unsigned long *phys)
{
    int ret;
    struct owl_ion_phys_data phys_data = {
        .handle = handle,
    };
    struct ion_custom_data data = {
        .cmd = OWL_ION_GET_PHY,
        .arg = (unsigned long)&phys_data,
    };

    ret = ion_ioctl(fd, ION_IOC_CUSTOM, &data);
    if (ret < 0)
        return ret;
    *phys = phys_data.phys_addr;
    return ret;
}
#endif

int ion_count = 0;

/* Allocate memory via ION; returns 0 on success. */
int sys_mem_allocate(unsigned int size, void **vir_addr, ion_user_handle_t *p_ion_handle)
{
    int ret;

    if (!ion_count) {
        ion_fd = ion_open();
        if (ion_fd < 0) {
            printf("ion_open failed\n");
            return -1;
        }
        printf("ion_open ok ion_fd = %d\n", ion_fd);
    }

    ret = ion_alloc(ion_fd, size, 0, 1, 0, &ion_handle_t);
    if (ret) {
        printf("%s failed: %s\n", __func__, strerror(ret));
        return -1;
    }
    *p_ion_handle = ion_handle_t;

    ret = ion_map(ion_fd, ion_handle_t, size, PROT_READ | PROT_WRITE,
                  MAP_SHARED, 0, (unsigned char **)vir_addr, &ion_map_fd);
    if (ret) {
        printf("ion_map error\n");
        return -1;
    }
    printf("ion_map ok\n");

    ion_count++;
    return 0;
}
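The allocator above never tears anything down. A hypothetical counterpart (sys_mem_free() does not appear in the source; the name and shape are assumed) would undo each step in reverse order, using the same globals:

/* Hypothetical teardown counterpart; ion_fd, ion_map_fd and ion_count are the
 * globals used by sys_mem_allocate() above. Note that, as in the allocator,
 * ion_map_fd only tracks the most recently mapped buffer. */
int sys_mem_free(void *vir_addr, unsigned int size, ion_user_handle_t handle)
{
    munmap(vir_addr, size);      /* undo the mapping created by ion_map() */
    close(ion_map_fd);           /* release the map fd returned by ion_map() */
    ion_free(ion_fd, handle);    /* drop the kernel-side handle reference */
    if (--ion_count == 0) {      /* close the client with the last buffer */
        ion_close(ion_fd);
        ion_fd = -1;
    }
    return 0;
}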
int alloc_device_open(hw_module_t const *module, const char * /*name*/, hw_device_t **device)
{
    alloc_device_t *dev;

    dev = new alloc_device_t;
    if (NULL == dev) {
        return -1;
    }

#if GRALLOC_ARM_UMP_MODULE
    ump_result ump_res = ump_open();
    if (UMP_OK != ump_res) {
        AERR("UMP open failed with %d", ump_res);
        delete dev;
        return -1;
    }
#endif

    /* initialize our state here */
    memset(dev, 0, sizeof(*dev));

    /* initialize the procs */
    dev->common.tag = HARDWARE_DEVICE_TAG;
    dev->common.version = 0;
    dev->common.module = const_cast<hw_module_t *>(module);
    dev->common.close = alloc_device_close;
    dev->alloc = alloc_device_alloc;
    dev->free = alloc_device_free;

#if GRALLOC_ARM_DMA_BUF_MODULE
    private_module_t *m = reinterpret_cast<private_module_t *>(dev->common.module);
    m->ion_client = ion_open();
    if (m->ion_client < 0) {
        AERR("ion_open failed with %s", strerror(errno));
        delete dev;
        return -1;
    }
#endif

    *device = &dev->common;
    return 0;
}
int _ion_alloc_test(int *fd, ion_user_handle_t *handle)
{
    int ret;

    *fd = ion_open();
    if (*fd < 0)
        return *fd;

    ret = ion_alloc(*fd, len, align, heap_id, alloc_flags, handle);
    if (ret)
        printf("%s failed: %s\n", __func__, strerror(ret));
    return ret;
}
TEST_F(Allocate, Zeroed)
{
    void *zeroes = calloc(4096, 1);

    for (unsigned int heapMask : m_allHeaps) {
        SCOPED_TRACE(::testing::Message() << "heap " << heapMask);
        int fds[16];
        for (unsigned int i = 0; i < 16; i++) {
            int map_fd = -1;

            ASSERT_EQ(0, ion_alloc_fd(m_ionFd, 4096, 0, heapMask, 0, &map_fd));
            ASSERT_GE(map_fd, 0);

            void *ptr = NULL;
            ptr = mmap(NULL, 4096, PROT_WRITE, MAP_SHARED, map_fd, 0);
            ASSERT_TRUE(ptr != MAP_FAILED);   // mmap() signals failure with MAP_FAILED, not NULL

            memset(ptr, 0xaa, 4096);

            ASSERT_EQ(0, munmap(ptr, 4096));
            fds[i] = map_fd;
        }

        for (unsigned int i = 0; i < 16; i++) {
            ASSERT_EQ(0, close(fds[i]));
        }

        int newIonFd = ion_open();
        int map_fd = -1;

        ASSERT_EQ(0, ion_alloc_fd(newIonFd, 4096, 0, heapMask, 0, &map_fd));
        ASSERT_GE(map_fd, 0);

        void *ptr = NULL;
        ptr = mmap(NULL, 4096, PROT_READ, MAP_SHARED, map_fd, 0);
        ASSERT_TRUE(ptr != MAP_FAILED);

        ASSERT_EQ(0, memcmp(ptr, zeroes, 4096));

        ASSERT_EQ(0, munmap(ptr, 4096));
        ASSERT_EQ(0, close(map_fd));
    }

    free(zeroes);
}
unsigned int IonGetAddr(void *handle)
{
    unsigned int phy_adr = 0;
    struct ion_handle *handle_ion;
    private_handle_t *hnd = NULL;
    SUNXI_hwcdev_context_t *Globctx = &gSunxiHwcDevice;

    Globctx->ion_fd = ion_open();
    if (Globctx->ion_fd != -1) {
        hnd = (private_handle_t *)handle;
        ion_import(Globctx->ion_fd, hnd->share_fd, &handle_ion);
        phy_adr = (unsigned int)ion_getphyadr(Globctx->ion_fd, (void *)(handle_ion));
        ion_sync_fd(Globctx->ion_fd, hnd->share_fd);
        ion_close(Globctx->ion_fd);
        Globctx->ion_fd = -1;
    }
    return phy_adr;
}
int _ion_alloc_test(int *fd, struct ion_handle **handle)
{
    int ret;

    *fd = ion_open();
    if (*fd < 0)
        return *fd;

    if (tiler_test)
        ret = ion_alloc_tiler(*fd, width, height, fmt, alloc_flags, handle, &stride);
    else
        ret = ion_alloc(*fd, len, align, alloc_flags, handle);

    if (ret)
        printf("%s failed: %s\n", __func__, strerror(ret));
    return ret;
}
void check_pid()
{
    struct actal_mem *user_p;
    // int ret = 0;

    /* Guard against concurrent threads. */
    if (pthread_mutex_lock(&mutex) != 0) {
        ALOGE("get mutex failed");
        return;
    }

    if (s_pid != getpid()) {
        ALOGD("PID changed, reopen ion device");
        ALOGD("parent pid = %d, fd = %d", s_pid, s_fd);
        if (s_top_p != NULL) {
            s_current_p = s_top_p->next;
            while ((user_p = s_current_p) != NULL) {
                s_current_p = user_p->next;
                // ret = ion_free(user_p->fd, user_p->handle);
                munmap(user_p->ptr, user_p->len);
                // close(user_p->map_fd);
                free(user_p);
                user_p = NULL;
            }
            s_top_p->next = NULL;
            s_current_p = s_top_p;
        }
        ion_close(s_fd);
        s_fd = ion_open();
        s_pid = getpid();
        ALOGD("new pid = %d, fd = %d", s_pid, s_fd);
    }

    if (pthread_mutex_unlock(&mutex) != 0) {
        ALOGE("free mutex failed");
        return;
    }
}
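A hedged usage sketch (wrapper and allocator names are assumed, not from the source): the point of check_pid() is to run at the top of every allocator entry point, so that a fork()ed child re-opens its own ion client instead of reusing the parent's fd and buffer list.

/* Hypothetical entry point; actal_alloc_locked() stands in for the real
 * vendor allocator that operates on s_fd. */
void *actal_malloc_example(int len)
{
    check_pid();                    /* reopen s_fd if we are in a fork()ed child */
    return actal_alloc_locked(s_fd, len);
}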
/*!
 * @brief Free the specified memory.
 * When the user wants to free memory back to the system, they need to fill
 * in the physical address and size to be freed in the buff structure.
 *
 * @param which  memory type; must be VPU_IOC_PHYMEM_FREE
 * @param buff   the structure containing memory information to be freed
 *
 * @return
 * @li 0   Freeing memory succeeded.
 * @li -1  Freeing memory failed.
 */
int _IOFreePhyMem(int which, vpu_mem_desc *buff)
{
#ifdef BUILD_FOR_ANDROID
#ifdef USE_ION
#if LINUX_VERSION_CODE >= KERNEL_VERSION(3, 10, 0)
    int shared_fd;
#else
    struct ion_handle *handle;
#endif
    int fd;

    if (!buff || !(buff->size) || ((unsigned long)buff->cpu_addr == 0)) {
        err_msg("Error!_IOFreePhyMem:Invalid parameters");
        return -1;
    }

    if (which != VPU_IOC_PHYMEM_FREE) {
        err_msg("Error!_IOFreePhyMem unsupported memtype: %d", which);
        return -1;
    }

#if LINUX_VERSION_CODE >= KERNEL_VERSION(3, 10, 0)
    shared_fd = buff->cpu_addr;
#else
    handle = (struct ion_handle *)buff->cpu_addr;
#endif

    fd = ion_open();
    if (fd <= 0) {
        err_msg("ion open failed!\n");
        return -1;
    }

#if LINUX_VERSION_CODE >= KERNEL_VERSION(3, 10, 0)
    ion_close(shared_fd);
    info_msg("<ion> free handle: 0x%x, paddr: 0x%x, vaddr: 0x%x",
             (unsigned int)shared_fd, (unsigned int)buff->phy_addr,
             (unsigned int)buff->virt_uaddr);
#else
    ion_free(fd, handle);
    info_msg("<ion> free handle: 0x%x, paddr: 0x%x, vaddr: 0x%x",
             (unsigned int)handle, (unsigned int)buff->phy_addr,
             (unsigned int)buff->virt_uaddr);
#endif
    ion_close(fd);

    munmap((void *)buff->virt_uaddr, buff->size);
    memset((void *)buff, 0, sizeof(*buff));
#elif USE_GPU
    struct g2d_buf *gbuf = (struct g2d_buf *)buff->cpu_addr;

    if (gbuf) {
        if (g2d_free(gbuf) != 0) {
            err_msg("%s: gpu allocator failed to free buffer 0x%x",
                    __FUNCTION__, (unsigned int)gbuf);
            return -1;
        }
        info_msg("<gpu> free handle: 0x%x, paddr: 0x%x, vaddr: 0x%x",
                 (unsigned int)gbuf, (unsigned int)buff->phy_addr,
                 (unsigned int)buff->virt_uaddr);
    }
    memset((void *)buff, 0, sizeof(*buff));
#else
    int fd_pmem;

    if (!buff || !(buff->size) || ((int)buff->cpu_addr <= 0)) {
        err_msg("Error!_IOFreePhyMem:Invalid parameters");
        return -1;
    }

    if (which != VPU_IOC_PHYMEM_FREE) {
        err_msg("Error!_IOFreePhyMem unsupported memtype: %d", which);
        return -1;
    }

    fd_pmem = (int)buff->cpu_addr;
    if (fd_pmem) {
        munmap((void *)buff->virt_uaddr, buff->size);
        close(fd_pmem);
    }
    memset((void *)buff, 0, sizeof(*buff));
#endif
#else
    if (buff->phy_addr != 0) {
        dprintf(3, "%s: phy addr = %08lx\n", __func__, buff->phy_addr);
        ioctl(vpu_fd, which, buff);
    }

    sz_alloc -= buff->size;
    dprintf(3, "%s: total=%d\n", __func__, sz_alloc);
    memset(buff, 0, sizeof(*buff));
#endif
    return 0;
}
int _IOGetPhyMem(int which, vpu_mem_desc *buff)
{
#ifdef BUILD_FOR_ANDROID
    const size_t pagesize = getpagesize();
    int err, fd;
#ifdef USE_ION
#if LINUX_VERSION_CODE >= KERNEL_VERSION(3, 10, 0)
    ion_user_handle_t handle;
#else
    struct ion_handle *handle;
#endif
    int share_fd, ret = -1;
    unsigned char *ptr;
#elif USE_GPU
    struct g2d_buf *gbuf;
    int bytes;
#else
    /* Get memory from pmem space for android */
    struct pmem_region region;
#endif

    if ((!buff) || (!buff->size)) {
        err_msg("Error!_IOGetPhyMem:Invalid parameters");
        return -1;
    }

    buff->cpu_addr = 0;
    buff->phy_addr = 0;
    buff->virt_uaddr = 0;

    if (which == VPU_IOC_GET_WORK_ADDR) {
        if (ioctl(vpu_fd, which, buff) < 0) {
            err_msg("mem allocation failed!\n");
            buff->phy_addr = 0;
            buff->cpu_addr = 0;
            return -1;
        }
        return 0;
    }

    if (which != VPU_IOC_PHYMEM_ALLOC) {
        err_msg("Error!_IOGetPhyMem unsupported memtype: %d", which);
        return -1;
    }

    buff->size = (buff->size + pagesize - 1) & ~(pagesize - 1);

#ifdef USE_ION
    fd = ion_open();
    if (fd <= 0) {
        err_msg("ion open failed!\n");
        return -1;
    }

#if LINUX_VERSION_CODE >= KERNEL_VERSION(3, 10, 0)
    err = ion_alloc(fd, buff->size, pagesize, 1, 0, &handle);
#else
    err = ion_alloc(fd, buff->size, pagesize, 1, &handle);
#endif
    if (err) {
        err_msg("ion allocation failed!\n");
        goto error;
    }

    err = ion_map(fd, handle, buff->size, PROT_READ | PROT_WRITE,
                  MAP_SHARED, 0, &ptr, &share_fd);
    if (err) {
        err_msg("ion map failed!\n");
        goto error;
    }

    /* vendor ion_phys() variant: returns the physical address, 0 on failure */
    err = ion_phys(fd, handle);
    if (err == 0) {
        err_msg("ion get physical address failed!\n");
        goto error;
    }

    buff->virt_uaddr = (unsigned long)ptr;
    buff->phy_addr = (unsigned long)err;
#if LINUX_VERSION_CODE >= KERNEL_VERSION(3, 10, 0)
    ion_free(fd, handle);
    buff->cpu_addr = (unsigned long)share_fd;
#else
    buff->cpu_addr = (unsigned long)handle;
#endif
    memset((void *)buff->virt_uaddr, 0, buff->size);
    ret = 0;
    info_msg("<ion> alloc handle: 0x%x, paddr: 0x%x, vaddr: 0x%x",
             (unsigned int)handle, (unsigned int)buff->phy_addr,
             (unsigned int)buff->virt_uaddr);

error:
#if LINUX_VERSION_CODE < KERNEL_VERSION(3, 10, 0)
    close(share_fd);
#endif
    ion_close(fd);
    return ret;
#elif USE_GPU
    bytes = buff->size + PAGE_SIZE;
    gbuf = g2d_alloc(bytes, 0);
    if (!gbuf) {
        err_msg("%s: gpu allocator failed to alloc buffer with size %d",
                __FUNCTION__, buff->size);
        return -1;
    }

    buff->virt_uaddr = (unsigned long)gbuf->buf_vaddr;
    buff->phy_addr = (unsigned long)gbuf->buf_paddr;
    buff->cpu_addr = (unsigned long)gbuf;

    /* vpu implicitly requires page alignment for the address, round it to a page edge */
    buff->virt_uaddr = (buff->virt_uaddr + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1);
    buff->phy_addr = (buff->phy_addr + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1);
    memset((void *)buff->virt_uaddr, 0, buff->size);

    info_msg("<gpu> alloc handle: 0x%x, paddr: 0x%x, vaddr: 0x%x",
             (unsigned int)gbuf, (unsigned int)buff->phy_addr,
             (unsigned int)buff->virt_uaddr);
    return 0;
#else
    fd = (unsigned long)open("/dev/pmem_adsp", O_RDWR | O_SYNC);
    if (fd < 0) {
        err_msg("Error!_IOGetPhyMem Error,cannot open pmem");
        return -1;
    }

    err = ioctl(fd, PMEM_GET_TOTAL_SIZE, &region);

    buff->virt_uaddr = (unsigned long)mmap(0, buff->size,
                                           PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (buff->virt_uaddr == (unsigned long)MAP_FAILED) {
        err_msg("Error!mmap(fd=%d, size=%u) failed (%s)",
                fd, buff->size, strerror(errno));
        close(fd);
        return -1;
    }

    memset(&region, 0, sizeof(region));
    if (ioctl(fd, PMEM_GET_PHYS, &region) == -1) {
        err_msg("Error!Failed to get physical address of source!");
        munmap((void *)buff->virt_uaddr, buff->size);
        close(fd);
        return -1;
    }

    buff->phy_addr = (unsigned long)region.offset;
    buff->cpu_addr = (unsigned long)fd;
    memset((void *)buff->virt_uaddr, 0, buff->size);
#endif
#else
    if (ioctl(vpu_fd, which, buff) < 0) {
        err_msg("mem allocation failed!\n");
        buff->phy_addr = 0;
        buff->cpu_addr = 0;
        return -1;
    }
    sz_alloc += buff->size;
    dprintf(3, "%s: phy addr = %08lx\n", __func__, buff->phy_addr);
    dprintf(3, "%s: alloc=%d, total=%d\n", __func__, buff->size, sz_alloc);
#endif
    return 0;
}
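A minimal pairing sketch (assumed, not from the source file): allocation and free requests travel through the two helpers above with the same descriptor, using the VPU_IOC_PHYMEM_ALLOC/VPU_IOC_PHYMEM_FREE request codes they check for.

vpu_mem_desc desc = {0};
desc.size = 1024 * 1024;   /* rounded up to a page boundary internally */
if (_IOGetPhyMem(VPU_IOC_PHYMEM_ALLOC, &desc) == 0) {
    /* ... use desc.virt_uaddr / desc.phy_addr ... */
    _IOFreePhyMem(VPU_IOC_PHYMEM_FREE, &desc);   /* also munmap()s and clears desc */
}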
//-----------------------------------------------------------------------------
///////////////////////////////////////////////////////////////////////
/// We increase the global and local counts first, then decide whether to
/// initialize m4uDrv/ion_dev and the m4u ports according to the local count
/// and global count respectively.
MBOOL IMemDrvImp::init(void)
{
    MBOOL Result = MTRUE;
    MINT32 ret = 0;
    ISP_REF_CNT_CTRL_STRUCT ref_cnt;
    //
    Mutex::Autolock lock(mLock);
    //
#if defined(_use_kernel_ref_cnt_)
    //
    if (mIspFd < 0) {
        mIspFd = open(ISP_DEV_NAME, O_RDONLY);
        if (mIspFd < 0) {   // 1st time open failed.
            IMEM_ERR("ISP kernel open fail, errno(%d):%s.", errno, strerror(errno));
            Result = MFALSE;
            goto EXIT;
        }
    }
    //
    IMEM_DBG("use kernel ref. cnt.mIspFd(%d)", mIspFd);
    ///////////////////////////////////////////////
    // increase global and local count first
    ref_cnt.ctrl = ISP_REF_CNT_INC;
    ref_cnt.id = ISP_REF_CNT_ID_IMEM;
    ref_cnt.data_ptr = (MUINT32)&mInitCount;
    ret = ioctl(mIspFd, ISP_REF_CNT_CTRL, &ref_cnt);
    if (ret < 0) {
        IMEM_ERR("ISP_REF_CNT_INC fail(%d)[errno(%d):%s] \n", ret, errno, strerror(errno));
        Result = MFALSE;
        goto EXIT;
    }
    android_atomic_inc(&mLocal_InitCount);
    IMEM_DBG("#flag2# mInitCount(%d),mInitCount>0 and run _use_kernel_ref_cnt_\n", mInitCount);
#else
    IMEM_DBG("mInitCount(%d) ", mInitCount);
    IMEM_DRV_DELAY
    android_atomic_inc(&mInitCount);
    //IMEM_DBG("#flag3# mInitCount(%d),mInitCount>0 and run w\o _use_kernel_ref_cnt_\n",mInitCount);
#endif
    IMEM_INF("mInitCount(%d) mLocal_InitCount(%d) ", mInitCount, mLocal_InitCount);
    //////////////////////////////////////////
    // init. buf_map: erase all
    buf_map.clear();   // actually does nothing.
    //
#if defined(__ISP_USE_PMEM__)
    //
#elif defined(__ISP_USE_STD_M4U__) || defined(__ISP_USE_ION__)
    //////////////////////////////////////////////////////
    // we initialize m4udrv and open the ion device when the local count is 1,
    // and configure the m4u ports when the global count is 1
    if (mLocal_InitCount == 1) {
        gDumpIMemcLog = checkDumpIMem();
        mpM4UDrv = new MTKM4UDrv();
#if defined(__ISP_USE_ION__)
        mIonDrv = ion_open();
        if (mIonDrv < 0) {
            IMEM_ERR("ion device open FAIL ");
            return MFALSE;
        }
        IMEM_INF("open ion id(%d).\n", mIonDrv);
#endif
        //if (mInitCount == 1)
        {
            IMEM_INF("do enable_m4u_fun for M4U_CLNTMOD_CAM ");
            mpM4UDrv->m4u_enable_m4u_func(M4U_CLNTMOD_CAM);
            //
            M4U_PORT_STRUCT port;
            port.Virtuality = 1;
            port.Security = 0;
            port.domain = 3;
            port.Distance = 1;
            port.Direction = 0;   // M4U_DMA_READ_WRITE
            //
            port.ePortID = M4U_PORT_CAM_IMGO;
            ret = mpM4UDrv->m4u_config_port(&port);
            port.ePortID = M4U_PORT_CAM_IMG2O;
            ret = mpM4UDrv->m4u_config_port(&port);
            port.ePortID = M4U_PORT_CAM_LSCI;
            ret = mpM4UDrv->m4u_config_port(&port);
            port.ePortID = M4U_PORT_CAM_IMGI;
            ret = mpM4UDrv->m4u_config_port(&port);
            port.ePortID = M4U_PORT_CAM_ESFKO;
            ret = mpM4UDrv->m4u_config_port(&port);
            port.ePortID = M4U_PORT_CAM_AAO;
            ret = mpM4UDrv->m4u_config_port(&port);
        }   // matches "if global count"
    }   // matches "if local count"
#endif
    //
EXIT:
    if (!Result) {
    }
    return Result;
}
/*--------------------MemoryManager Class STARTS here-----------------------------*/

void *MemoryManager::allocateBuffer(int width, int height, const char *format, int &bytes, int numBufs)
{
    LOG_FUNCTION_NAME;

    if (mIonFd == 0) {
        mIonFd = ion_open();
        if (mIonFd < 0) {   // ion_open() reports failure with a negative value
            LOGE("ion_open failed!!!");
            mIonFd = 0;
            return NULL;
        }
    }

    ///We allocate numBufs+1 because the last entry will be marked NULL to indicate
    ///end of array, which is used when freeing the buffers
    const uint numArrayEntriesC = (uint)(numBufs + 1);

    ///Allocate a buffer array
    uint32_t *bufsArr = new uint32_t[numArrayEntriesC];
    if (!bufsArr) {
        LOGE("Allocation failed when creating buffers array of %d uint32_t elements",
             numArrayEntriesC);
        LOG_FUNCTION_NAME_EXIT;
        return NULL;
    }

    ///Initialize the array with zeros - this will help us while freeing the array in case of error
    ///If a value of an array element is NULL, it means we didn't allocate it
    memset(bufsArr, 0, sizeof(*bufsArr) * numArrayEntriesC);

    //2D Allocations are not supported currently
    if (bytes != 0) {
        struct ion_handle *handle;
        int mmap_fd;

        ///1D buffers
        for (int i = 0; i < numBufs; i++) {
            int ret = ion_alloc(mIonFd, bytes, 0, 1 << ION_HEAP_TYPE_CARVEOUT, &handle);
            if (ret < 0) {
                LOGE("ion_alloc resulted in error %d", ret);
                goto error;
            }

            LOGE("Before mapping, handle = %x, nSize = %d", handle, bytes);
            if ((ret = ion_map(mIonFd, handle, bytes, PROT_READ | PROT_WRITE, MAP_SHARED, 0,
                               (unsigned char **)&bufsArr[i], &mmap_fd)) < 0) {
                LOGE("Userspace mapping of ION buffers returned error %d", ret);
                ion_free(mIonFd, handle);
                goto error;
            }

            mIonHandleMap.add(bufsArr[i], (unsigned int)handle);
            mIonFdMap.add(bufsArr[i], (unsigned int)mmap_fd);
            mIonBufLength.add(bufsArr[i], (unsigned int)bytes);
        }
    } else {   // bytes == 0 means a 2-D tiler buffer request
    }

    LOG_FUNCTION_NAME_EXIT;
    return (void *)bufsArr;

error:
    LOGE("Freeing buffers already allocated after error occurred");
    freeBuffer(bufsArr);

    if (NULL != mErrorNotifier.get()) {
        mErrorNotifier->errorNotify(-ENOMEM);
    }

    LOG_FUNCTION_NAME_EXIT;
    return NULL;
}
static int gralloc_register_buffer(gralloc_module_t const *module, buffer_handle_t handle)
{
    MALI_IGNORE(module);

    if (private_handle_t::validate(handle) < 0) {
        AERR("Registering invalid buffer 0x%p, returning error", handle);
        return -EINVAL;
    }

    // if this handle was created in this process, then we keep it as is.
    private_handle_t *hnd = (private_handle_t *)handle;
    int retval = -EINVAL;

    pthread_mutex_lock(&s_map_lock);

#if GRALLOC_ARM_UMP_MODULE
    if (!s_ump_is_open) {
        // MJOLL-4012: UMP implementation needs a ump_close() for each ump_open()
        ump_result res = ump_open();
        if (res != UMP_OK) {
            pthread_mutex_unlock(&s_map_lock);
            AERR("Failed to open UMP library with res=%d", res);
            return retval;
        }
        s_ump_is_open = 1;
    }
#endif

    hnd->pid = getpid();

    if (hnd->flags & private_handle_t::PRIV_FLAGS_FRAMEBUFFER) {
        AERR("Can't register buffer 0x%p as it is a framebuffer", handle);
    } else if (hnd->flags & private_handle_t::PRIV_FLAGS_USES_UMP) {
#if GRALLOC_ARM_UMP_MODULE
        hnd->ump_mem_handle = (int)ump_handle_create_from_secure_id(hnd->ump_id);
        if (UMP_INVALID_MEMORY_HANDLE != (ump_handle)hnd->ump_mem_handle) {
            hnd->base = ump_mapped_pointer_get((ump_handle)hnd->ump_mem_handle);
            if (0 != hnd->base) {
                hnd->lockState = private_handle_t::LOCK_STATE_MAPPED;
                hnd->writeOwner = 0;
                hnd->lockState = 0;

                pthread_mutex_unlock(&s_map_lock);
                return 0;
            } else {
                AERR("Failed to map UMP handle 0x%x", hnd->ump_mem_handle);
            }
            ump_reference_release((ump_handle)hnd->ump_mem_handle);
        } else {
            AERR("Failed to create UMP handle 0x%x", hnd->ump_mem_handle);
        }
#else
        AERR("Gralloc does not support UMP. Unable to register UMP memory for handle 0x%p", hnd);
#endif
    } else if (hnd->flags & private_handle_t::PRIV_FLAGS_USES_ION) {
#if GRALLOC_ARM_DMA_BUF_MODULE
        int ret;
        unsigned char *mappedAddress;
        size_t size = hnd->size;
        hw_module_t *pmodule = NULL;
        private_module_t *m = NULL;

        if (hw_get_module(GRALLOC_HARDWARE_MODULE_ID, (const hw_module_t **)&pmodule) == 0) {
            m = reinterpret_cast<private_module_t *>(pmodule);
        } else {
            AERR("Could not get gralloc module for handle: 0x%p", hnd);
            retval = -errno;
            goto cleanup;
        }

        /* the test condition is set to m->ion_client <= 0 here, because:
         * 1) module structures are initialized to 0 if no initial value is applied
         * 2) a second user process should get an ion fd greater than 0.
         */
        if (m->ion_client <= 0) {
            /* a second user process must obtain a client handle first via ion_open
             * before it can obtain the shared ion buffer */
            m->ion_client = ion_open();

            if (m->ion_client < 0) {
                AERR("Could not open ion device for handle: 0x%p", hnd);
                retval = -errno;
                goto cleanup;
            }
        }

        mappedAddress = (unsigned char *)mmap(NULL, size, PROT_READ | PROT_WRITE,
                                              MAP_SHARED, hnd->share_fd, 0);
        if (MAP_FAILED == mappedAddress) {
            AERR("mmap( share_fd:%d ) failed with %s", hnd->share_fd, strerror(errno));
            retval = -errno;
            goto cleanup;
        }

        hnd->base = mappedAddress + hnd->offset;
        pthread_mutex_unlock(&s_map_lock);
        return 0;
#endif
    } else {
        AERR("registering non-UMP buffer not supported. flags = %d", hnd->flags);
    }

cleanup:
    pthread_mutex_unlock(&s_map_lock);
    return retval;
}
void ion_share_test()
{
    ion_user_handle_t handle;
    int sd[2];
    int num_fd = 1;
    struct iovec count_vec = {
        .iov_base = &num_fd,
        .iov_len = sizeof num_fd,
    };
    char buf[CMSG_SPACE(sizeof(int))];

    socketpair(AF_UNIX, SOCK_STREAM, 0, sd);

    if (fork()) {
        struct msghdr msg = {
            .msg_control = buf,
            .msg_controllen = sizeof buf,
            .msg_iov = &count_vec,
            .msg_iovlen = 1,
        };
        struct cmsghdr *cmsg;
        int fd, share_fd, ret;
        char *ptr;

        /* parent */
        if (_ion_alloc_test(&fd, &handle))
            return;
        ret = ion_share(fd, handle, &share_fd);
        if (ret)
            printf("share failed %s\n", strerror(errno));
        ptr = mmap(NULL, len, prot, map_flags, share_fd, 0);
        if (ptr == MAP_FAILED) {
            return;
        }
        strcpy(ptr, "master");
        cmsg = CMSG_FIRSTHDR(&msg);
        cmsg->cmsg_level = SOL_SOCKET;
        cmsg->cmsg_type = SCM_RIGHTS;
        cmsg->cmsg_len = CMSG_LEN(sizeof(int));
        *(int *)CMSG_DATA(cmsg) = share_fd;
        /* send the fd */
        printf("master? [%10s] should be [master]\n", ptr);
        printf("master sending msg 1\n");
        sendmsg(sd[0], &msg, 0);
        if (recvmsg(sd[0], &msg, 0) < 0)
            perror("master recv msg 2");
        printf("master? [%10s] should be [child]\n", ptr);

        /* send ping */
        sendmsg(sd[0], &msg, 0);
        printf("master->master? [%10s]\n", ptr);
        if (recvmsg(sd[0], &msg, 0) < 0)
            perror("master recv 1");
    } else {
        struct msghdr msg;
        struct cmsghdr *cmsg;
        char *ptr;
        int fd, recv_fd;
        char *child_buf[100];
        /* child */
        struct iovec count_vec = {
            .iov_base = child_buf,
            .iov_len = sizeof child_buf,
        };
        struct msghdr child_msg = {
            .msg_control = buf,
            .msg_controllen = sizeof buf,
            .msg_iov = &count_vec,
            .msg_iovlen = 1,
        };

        if (recvmsg(sd[1], &child_msg, 0) < 0)
            perror("child recv msg 1");
        cmsg = CMSG_FIRSTHDR(&child_msg);
        if (cmsg == NULL) {
            printf("no cmsg rcvd in child");
            return;
        }
        recv_fd = *(int *)CMSG_DATA(cmsg);
        if (recv_fd < 0) {
            printf("could not get recv_fd from socket");
            return;
        }
        printf("child %d\n", recv_fd);
        fd = ion_open();
        ptr = mmap(NULL, len, prot, map_flags, recv_fd, 0);
        if (ptr == MAP_FAILED) {
            return;
        }
        printf("child? [%10s] should be [master]\n", ptr);
        strcpy(ptr, "child");
        printf("child sending msg 2\n");
        sendmsg(sd[1], &child_msg, 0);
    }
}

int main(int argc, char *argv[])
{
    int c;
    enum tests {
        ALLOC_TEST = 0, MAP_TEST, SHARE_TEST,
    };

    while (1) {
        static struct option opts[] = {
            {"alloc", no_argument, 0, 'a'},
            {"alloc_flags", required_argument, 0, 'f'},
            {"map", no_argument, 0, 'm'},
            {"share", no_argument, 0, 's'},
            {"len", required_argument, 0, 'l'},
            {"align", required_argument, 0, 'g'},
            {"map_flags", required_argument, 0, 'z'},
            {"prot", required_argument, 0, 'p'},
            {"width", required_argument, 0, 'w'},
            {"height", required_argument, 0, 'h'},
            {0, 0, 0, 0},   /* getopt_long() requires a zero-filled terminator */
        };
        int i = 0;

        c = getopt_long(argc, argv, "af:h:l:mr:stw:", opts, &i);
        if (c == -1)
            break;

        switch (c) {
        case 'l':
            len = atol(optarg);
            break;
        case 'g':
            align = atol(optarg);
            break;
        case 'z':
            /* -z/--map_flags selects the mmap() MAP_* flags */
            map_flags = 0;
            map_flags |= strstr(optarg, "MAP_PRIVATE") ? MAP_PRIVATE : 0;
            map_flags |= strstr(optarg, "MAP_SHARED") ? MAP_SHARED : 0;
            break;
        case 'p':
            /* -p/--prot selects the mmap() PROT_* protection bits */
            prot = 0;
            prot |= strstr(optarg, "PROT_EXEC") ? PROT_EXEC : 0;
            prot |= strstr(optarg, "PROT_READ") ? PROT_READ : 0;
            prot |= strstr(optarg, "PROT_WRITE") ? PROT_WRITE : 0;
            prot |= strstr(optarg, "PROT_NONE") ? PROT_NONE : 0;
            break;
        case 'f':
            alloc_flags = atol(optarg);
            break;
        case 'a':
            test = ALLOC_TEST;
            break;
        case 'm':
            test = MAP_TEST;
            break;
        case 's':
            test = SHARE_TEST;
            break;
        case 'w':
            width = atol(optarg);
            break;
        case 'h':
            height = atol(optarg);
            break;
        }
    }

    printf("test %d, len %u, width %u, height %u align %u, "
           "map_flags %d, prot %d, alloc_flags %d\n",
           test, len, width, height, align, map_flags, prot, alloc_flags);

    switch (test) {
    case ALLOC_TEST:
        ion_alloc_test();
        break;
    case MAP_TEST:
        ion_map_test();
        break;
    case SHARE_TEST:
        ion_share_test();
        break;
    default:
        printf("must specify a test (alloc, map, share)\n");
    }
    return 0;
}
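Hypothetical invocations (the binary name is assumed, not given in the source), matching the option table above:

$ ion_test --alloc --len 1048576 --align 4096
$ ion_test --share --len 4096 --prot "PROT_READ PROT_WRITE" --map_flags MAP_SHARED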
/**
 * Keep allocating buffers of the specified size & type until allocation fails,
 * then free 10 buffers and allocate 10 buffers again.
 */
int ion_alloc_fail_alloc_test()
{
    int fd, ret = 0, i;
    struct ion_handle **handle;
    const int COUNT_ALLOC_MAX = 200;
    const int COUNT_REALLOC_MAX = 10;
    int count_alloc = COUNT_ALLOC_MAX, count_realloc = COUNT_ALLOC_MAX;

    fd = ion_open();
    if (fd < 0) {
        printf("%s(): FAILED to open ion device\n", __func__);
        return -1;
    }

    handle = (struct ion_handle **)malloc(COUNT_ALLOC_MAX * sizeof(struct ion_handle *));
    if (handle == NULL) {
        printf("%s(): FAILED to allocate memory for ion_handles\n", __func__);
        ion_close(fd);   /* don't leak the ion client fd */
        return -ENOMEM;
    }

    /* Allocate as many ion_handles as possible */
    for (i = 0; i < COUNT_ALLOC_MAX; i++) {
        ret = _ion_alloc_test(fd, &(handle[i]));
        printf("%s(): Alloc handle[%d]=%p\n", __func__, i, handle[i]);
        if (ret || ((int)handle[i] == -ENOMEM)) {
            printf("%s(): Alloc handle[%d]=%p FAILED, err:%s\n\n",
                   __func__, i, handle[i], strerror(ret));
            count_alloc = i;
            break;
        }
    }

    /* Free COUNT_REALLOC_MAX ion_handles */
    for (i = count_alloc - 1; i > (count_alloc - 1 - COUNT_REALLOC_MAX); i--) {
        printf("%s(): Free handle[%d]=%p\n", __func__, i, handle[i]);
        ret = ion_free(fd, handle[i]);
        if (ret) {
            printf("%s(): Free handle[%d]=%p FAILED, err:%s\n\n",
                   __func__, i, handle[i], strerror(ret));
        }
    }

    /* Allocate COUNT_REALLOC_MAX ion_handles again to verify we can still allocate */
    for (i = (count_alloc - COUNT_REALLOC_MAX); i < count_alloc; i++) {
        ret = _ion_alloc_test(fd, &(handle[i]));
        printf("%s(): Alloc handle[%d]=%p\n", __func__, i, handle[i]);
        if (ret || ((int)handle[i] == -ENOMEM)) {
            printf("%s(): Alloc handle[%d]=%p FAILED, err:%s\n\n",
                   __func__, i, handle[i], strerror(ret));
            count_realloc = i;
            goto err_alloc;
        }
    }
    count_realloc = i;

err_alloc:
    /* Free all ion_handles */
    for (i = 0; i < count_alloc; i++) {
        printf("%s(): Free handle[%d]=%p\n", __func__, i, handle[i]);
        ret = ion_free(fd, handle[i]);
        if (ret) {
            printf("%s(): Free handle[%d]=%p FAILED, err:%s\n",
                   __func__, i, handle[i], strerror(ret));
        }
    }

    ion_close(fd);
    free(handle);
    handle = NULL;

    printf("\ncount_alloc=%d, count_realloc=%d\n", count_alloc, count_realloc);

    if (ret || (count_alloc != count_realloc)) {
        printf("\nion alloc->fail->alloc test: FAILED\n\n");
        if (count_alloc != COUNT_ALLOC_MAX)
            ret = -ENOMEM;
    } else {
        printf("\nion alloc->fail->alloc test: PASSED\n\n");
    }
    return ret;
}
int ion_map_test(int count)
{
    int fd, ret = 0, i, count_alloc, count_map;
    struct ion_handle **handle;
    unsigned char **ptr;
    int *map_fd;

    fd = ion_open();
    if (fd < 0) {
        printf("%s(): FAILED to open ion device\n", __func__);
        return -1;
    }

    handle = (struct ion_handle **)malloc(count * sizeof(struct ion_handle *));
    if (handle == NULL) {
        printf("%s(): FAILED to allocate memory for ion_handles\n", __func__);
        ion_close(fd);
        return -ENOMEM;
    }

    count_alloc = count;
    count_map = count;

    /* Allocate ion_handles */
    for (i = 0; i < count; i++) {
        ret = _ion_alloc_test(fd, &(handle[i]));
        printf("%s(): Alloc handle[%d]=%p\n", __func__, i, handle[i]);
        if (ret || ((int)handle[i] == -ENOMEM)) {
            printf("%s(): Alloc handle[%d]=%p FAILED, err:%s\n",
                   __func__, i, handle[i], strerror(ret));
            count_alloc = i;
            goto err_alloc;
        }
    }

    /* Map ion_handles and validate */
    if (tiler_test)
        len = height * stride;

    ptr = (unsigned char **)malloc(count * sizeof(unsigned char *));
    map_fd = (int *)malloc(count * sizeof(int));

    for (i = 0; i < count; i++) {
        /* Map ion_handle on the user side */
        ret = ion_map(fd, handle[i], len, prot, map_flags, 0, &(ptr[i]), &(map_fd[i]));
        printf("%s(): Map handle[%d]=%p, map_fd=%d, ptr=%p\n",
               __func__, i, handle[i], map_fd[i], ptr[i]);
        if (ret) {
            printf("%s Map handle[%d]=%p FAILED, err:%s\n",
                   __func__, i, handle[i], strerror(ret));
            count_map = i;
            goto err_map;
        }

        /* Validate the mapping by writing data and reading it back */
        if (tiler_test)
            _ion_tiler_map_test(ptr[i]);
        else
            _ion_map_test(ptr[i]);
    }

    /* clean up properly */
err_map:
    for (i = 0; i < count_map; i++) {
        /* Unmap ion_handles; on failure just log and keep cleaning up */
        ret = munmap(ptr[i], len);
        printf("%s(): Unmap handle[%d]=%p, map_fd=%d, ptr=%p\n",
               __func__, i, handle[i], map_fd[i], ptr[i]);
        if (ret) {
            printf("%s(): Unmap handle[%d]=%p FAILED, err:%s\n",
                   __func__, i, handle[i], strerror(ret));
        }
        /* Close fds */
        close(map_fd[i]);
    }
    free(map_fd);
    free(ptr);

err_alloc:
    /* Free ion_handles */
    for (i = 0; i < count_alloc; i++) {
        printf("%s(): Free handle[%d]=%p\n", __func__, i, handle[i]);
        ret = ion_free(fd, handle[i]);
        if (ret) {
            printf("%s(): Free handle[%d]=%p FAILED, err:%s\n",
                   __func__, i, handle[i], strerror(ret));
        }
    }

    ion_close(fd);
    free(handle);
    handle = NULL;

    if (ret || (count_alloc != count) || (count_map != count)) {
        printf("\nion map test: FAILED\n\n");
        if ((count_alloc != count) || (count_map != count))
            ret = -ENOMEM;
    } else {
        printf("\nion map test: PASSED\n");
    }
    return ret;
}
int ion_m4u_misc_using()
{
    int ion_fd;
    ion_user_handle_t handle;
    int share_fd;
    volatile char *pBuf;
    unsigned int bufsize = 1 * 1024 * 1024;

    ion_fd = ion_open();
    if (ion_fd < 0) {
        printf("Cannot open ion device.\n");
        return 0;
    }

    if (ion_alloc_mm(ion_fd, bufsize, 4, 0, &handle)) {
        printf("IOCTL[ION_IOC_ALLOC] failed!\n");
        return 0;
    }

    if (ion_share(ion_fd, handle, &share_fd)) {
        printf("IOCTL[ION_IOC_SHARE] failed!\n");
        return 0;
    }

    pBuf = (char *)ion_mmap(ion_fd, NULL, bufsize, PROT_READ | PROT_WRITE,
                            MAP_SHARED, share_fd, 0);
    printf("ion_map: pBuf = 0x%lx\n", (unsigned long)pBuf);
    if (!pBuf) {
        printf("Cannot map ion buffer.\n");
        return 0;
    }

    MTKM4UDrv CM4u;
    unsigned int BufMVA;
    int ret;

    ret = CM4u.m4u_alloc_mva(0, (unsigned long)pBuf, bufsize,
                             M4U_PROT_READ | M4U_PROT_WRITE,
                             M4U_FLAGS_SEQ_ACCESS, &BufMVA);
    if (ret) {
        printf("allocate mva fail. ret=0x%x\n", ret);
        return ret;
    }
    printf("mva=0x%x\n", BufMVA);

    ret = CM4u.m4u_cache_sync(0, M4U_CACHE_FLUSH_BY_RANGE,
                              (unsigned long)pBuf, bufsize, BufMVA);
    if (ret) {
        printf("cache flush fail. ret=%d,va=0x%lx,size=0x%x\n",
               ret, (unsigned long)pBuf, bufsize);
        return ret;
    }

    ret = CM4u.m4u_dealloc_mva(0, (unsigned long)pBuf, bufsize, BufMVA);
    if (ret) {
        printf("m4u_dealloc_mva fail. ret=%d, mva=0x%x\n", ret, BufMVA);
    }
    return 0;
}