Robotics Developer for Vision-Based Robotic Arm Project
Budget: ~147 - 442 USD | Posted: 39 minutes ago
Client: Medium rank; $325 total spent; 2 hires; 2 open jobs; 5.00 from 1 review; registered 06/10/2014; India
I need two developers (robotics, Android, machine learning) based in Kolkata to work on my project.
I am developing a robotic arm project. At its core it is about controlling stepper motors with an Arduino and a Raspberry Pi: each joint of the arm must be driven based on video input from a camera. This is where the machine learning comes in; using OpenCV, the system needs to identify the location of an object and then perform more complex tasks, for example picking the object up. I have all the electronics parts with me. The actual project idea will be shared with the selected developer only.

This is a long-term project until we secure funding. As the startup is at an early stage, the salary will be modest, but you will gain a lot of knowledge. The work initially requires your physical presence: for the first days you will work from a small rented office or another shared space so everyone can understand the project together; if later work does not require your presence, you can work from home. I am looking for fresh graduates or junior developers who are eager to learn and comfortable with lower pay. I cannot guarantee a permanent position if the project fails, though I hope it does not. If you want to be part of this journey and feel your expertise aligns with the work described, please apply. Note that this is not a simple industrial robotic arm build; the arm is only the base on which the rest of the project will be developed. Maximum salary I can offer: 10k monthly. Thanks.

The ideal candidate should have experience with Arduino, Raspberry Pi, Python, Stable Diffusion, OpenCV, ML basics, and an understanding of inverse kinematics math. The project is expected to last 6-12 months.

Skills: Python, Arduino, Raspberry Pi, OpenCV, Stable Diffusion
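The posting combines two pieces that are easy to prototype independently: locating an object in the camera frame with OpenCV, and converting a target position into joint angles via inverse kinematics. Below is a minimal, illustrative Python sketch of both for a simple 2-link planar arm; the HSV colour range, link lengths, and pixel-to-metre scale are placeholder assumptions, not values from the posting.

```python
import cv2
import numpy as np

def find_object_centroid(frame_bgr, lower_hsv=(20, 100, 100), upper_hsv=(35, 255, 255)):
    """Locate a coloured object (placeholder HSV range) and return its pixel centroid."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower_hsv), np.array(upper_hsv))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    m = cv2.moments(largest)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])  # (x, y) in pixels

def two_link_ik(x, y, l1=0.20, l2=0.15):
    """Inverse kinematics for a planar 2-link arm; link lengths are assumptions (metres)."""
    d2 = x * x + y * y
    cos_t2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if abs(cos_t2) > 1:
        raise ValueError("target out of reach")
    t2 = np.arccos(cos_t2)  # elbow angle
    t1 = np.arctan2(y, x) - np.arctan2(l2 * np.sin(t2), l1 + l2 * np.cos(t2))  # shoulder angle
    return np.degrees(t1), np.degrees(t2)

# Example: map a detected pixel to a (made-up) workspace position and solve for joint angles.
centroid = find_object_centroid(cv2.imread("frame.jpg"))   # placeholder image file
if centroid is not None:
    x_m, y_m = centroid[0] * 0.0004, centroid[1] * 0.0004   # placeholder pixel-to-metre scale
    print(two_link_ik(x_m, y_m))
```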
Fixed budget: 12,500 - 37,500 INR (posted 39 minutes ago)
Face recognition
Hourly rate: 20 - 30 USD/hr | Posted: 1 hour ago
Client: Good rank; $1,029 total spent; 3 hires (2 active); 2 jobs posted; 100% hire rate; 3 open jobs; $12.00/hr average rate paid over 7 hours; 5.00 from 1 review; registered 30/10/2024; United States
Required Connects: 19
I need someone with experience in gaze detection who can read facial expressions and eye movement from a video file.
Skills: gaze-detection tools/models (OpenGaze, GazeML, or pre-trained models available in MediaPipe), AWS Amplify, expertise in handling video and audio files, Computer Vision, Machine Learning, Python, Neural Network, Image Processing, OpenCV
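The skills list mentions pre-trained MediaPipe models; a minimal sketch of how those could pull eye landmarks from a video file is below. The iris and eye-corner landmark indices are the ones commonly used with MediaPipe Face Mesh when refine_landmarks=True; the horizontal-gaze ratio is only a crude illustration, not a validated gaze estimator, and the file name is a placeholder.

```python
import cv2
import mediapipe as mp

LEFT_IRIS = [468, 469, 470, 471, 472]   # iris landmarks exposed when refine_landmarks=True
LEFT_EYE_CORNERS = (33, 133)            # outer / inner corner of the left eye

def gaze_ratios(video_path):
    """Yield a rough horizontal gaze ratio (0 = outer corner, 1 = inner corner) per frame."""
    face_mesh = mp.solutions.face_mesh.FaceMesh(refine_landmarks=True, max_num_faces=1)
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        result = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if not result.multi_face_landmarks:
            continue
        lm = result.multi_face_landmarks[0].landmark
        iris_x = sum(lm[i].x for i in LEFT_IRIS) / len(LEFT_IRIS)
        outer, inner = lm[LEFT_EYE_CORNERS[0]].x, lm[LEFT_EYE_CORNERS[1]].x
        yield (iris_x - outer) / (inner - outer + 1e-6)
    cap.release()
    face_mesh.close()

for ratio in gaze_ratios("session.mp4"):   # "session.mp4" is a placeholder file name
    print(f"horizontal gaze ratio: {ratio:.2f}")
```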
Hourly rate: 20 - 30 USD (posted 1 hour ago)
Computer vision/object detection
Hourly rate: 13 - 17 USD/hr | Posted: 1 hour ago
Client: Medium rank; 9 jobs posted; 22% hire rate; 2 open jobs; registered 22/11/2024; Sweden
Required Connects: 15
I am looking for help with a computer vision project. It is mostly available on a GitHub page, but I need some modifications and additions to meet my needs.
The application is based on "Multiply" from GitHub. It currently works with sequences involving 2+ actors, but I need it to work with sequences involving a single actor as well as multiple actors. I would prefer the application to work with JPG sequences only. FFmpeg integration is not needed. The preprocessing stage must include the ability to export camera and SMPL-X data to FBX format and also to import camera and SMPL-X data from FBX. This is a must-have, as opposed to being limited to forced trace input → training. We can use the example data available on GitHub to start, and I can also provide data for single-actor sequences. For preprocessing, I want to integrate ViTPose and OpenPose to improve accuracy. FBX import and export should be the preferred method for handling camera and SMPL-X data. I will handle the custom SMPL-X Maya rig and cameras on my side. If the trace camera and SMPL-X data can be exported to me as FBX (from the preprocessing stage), I will set up my system to parse the data into the trainer. The goal is to ensure compatibility with my workflow, allowing seamless export/import of camera and SMPL-X data via FBX while enhancing the accuracy of preprocessing with ViTPose and OpenPose.
Skills: Computer Vision, TensorFlow, Deep Learning, Machine Learning, OpenCV, Python, Neural Network, Artificial Intelligence
Hourly rate: 13 - 17 USD (posted 1 hour ago)
Unreal Engine 5 Plugin Development for Spectral Rendering
Fixed budget: 50,000 USD | Posted: 1 hour ago
Client: Risky rank; 1 open job; registered 26/11/2024; France
Required Connects: 10
We are seeking an experienced developer to create a specialized plugin for Unreal Engine 5 focused on spectral rendering. The ideal candidate should have a strong background in graphics programming and experience with Unreal Engine's API. This project will involve designing and implementing features that enhance the visual fidelity of rendered scenes using spectral techniques. If you are passionate about cutting-edge rendering technology and have a portfolio showcasing relevant projects, we would love to hear from you.
Skills: Unreal Engine, C++, Game Design, 3D Modeling, OpenCV
Fixed budget: 50,000 USD (posted 1 hour ago)
Data Science / Computer Vision expert
Budget: not specified | Posted: 5 hours ago
Client: Excellent rank; $900,521 total spent; 167 hires (45 active); 344 jobs posted; 49% hire rate; 1 open job; $15.90/hr average rate paid over 55,406 hours; 4.50 from 136 reviews; registered 20/07/2015; United States
Required Connects: 19
NO AGENCIES PLEASE
I am looking for developers (more than one) for several ongoing projects that require writing algorithms. I am looking for a Python ML/CV developer with experience in the following areas:
- OpenCV: camera calibration and 3D reconstruction
- Streaming video processing (GStreamer; ffmpeg experience would be a plus)
- TensorFlow and Keras (convolutional neural networks)
- Examples of your code in any of these areas would be a huge advantage

It would be an advantage to have:
- Published scientific articles related to the ML/CV field
- Advanced Python multiprocessing/multithreading techniques
- C++/Python integration
- PyTorch
- Camera calibration
- Object detection
- Object 3D position estimation
- Streaming video (GStreamer, ffmpeg)

I am looking for someone who is communicative, gives suggestions, asks questions, and understands the product delivery requirements. In your response, please describe the kind of AI projects you have done, their features, and your experience with camera calibration, object detection, object 3D position estimation, and streaming video (GStreamer, ffmpeg).
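Since camera calibration with OpenCV comes up twice in this posting, here is a minimal sketch of the standard chessboard-based calibration flow. The 9x6 pattern size, square size, and image folder are placeholder assumptions.

```python
import glob
import cv2
import numpy as np

PATTERN = (9, 6)       # inner corners of the chessboard (assumed pattern)
SQUARE_SIZE = 0.025    # metres per square (assumption)

# 3D coordinates of the chessboard corners in the board's own frame.
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_SIZE

obj_points, img_points, image_size = [], [], None
for path in glob.glob("calib_images/*.jpg"):   # placeholder image folder
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if not found:
        continue
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    obj_points.append(objp)
    img_points.append(corners)
    image_size = gray.shape[::-1]

# Intrinsic matrix and distortion coefficients; the RMS reprojection error is a quality check.
rms, camera_matrix, dist_coeffs, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None)
print("RMS reprojection error:", rms)
print("camera matrix:\n", camera_matrix)
```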
Skills: Data Analysis, Python, Data Science, Data Scraping, TensorFlow, Machine Learning, Deep Learning, Natural Language Processing, Neural Network, Artificial Neural Network, Deep Neural Network, OpenCV
Budget: not specified (posted 5 hours ago)
Computer Vision Developer Needed for Project
Fixed budget: 1,000 USD | Posted: 9 hours ago
Client: Risky rank; 1 open job; China
Required Connects: 9
We are seeking an experienced Computer Vision Developer to join our team for a fixed-price project. The ideal candidate should have a strong background in image processing, machine learning, and algorithm development. Your contributions will be pivotal in enhancing our application’s capabilities. If you are passionate about leveraging computer vision technologies to solve real-world problems, we want to hear from you! Please provide examples of your previous work in this field along with your application.
Skills: Python, Computer Vision, OpenCV, C++
Fixed budget: 1,000 USD (posted 9 hours ago)
Develop an Offline Website for Text Extraction from Images using OCR Technology
Fixed budget: 15 USD | Posted: 9 hours ago
Client: Risky rank; 1 open job; India
Required Connects: 9
Project Description:
I am looking for a skilled developer to create an offline text extraction website that uses OCR (Optical Character Recognition) technology. The website should allow users to upload image files (JPG, PNG) and extract text from these images without requiring an internet connection. The entire system should work offline, with no reliance on external APIs or cloud services.

Key Features & Requirements:
1. Offline Functionality: The entire website, including the OCR processing, must work offline; no internet connection should be needed to upload images or extract text. OCR should be performed locally using Tesseract OCR or an equivalent offline OCR engine.
2. User Interface (Frontend): A simple, intuitive interface that allows users to upload images in JPG or PNG format, view the extracted text after processing, and copy or download the extracted text. Provide progress indicators (e.g., a loading spinner) while the OCR is running.
3. OCR Integration (Backend): Use Tesseract OCR (or a similar open-source OCR engine) for extracting text from images. Implement basic image preprocessing (e.g., resizing, converting to grayscale) to improve OCR accuracy.
4. File Upload & Image Processing: Users should be able to upload images via a file input form. The backend will handle image processing, including OCR and text extraction.
5. Text Output: Display the extracted text cleanly on the webpage. Allow users to copy the text or save it as a text file.
6. User Registration & Login System: a Register page where users create an account with basic details such as username, email, and password; a Login page where registered users sign in with their credentials; and session-based authentication so users remain logged in until they log out.
7. User History Page: After logging in, users should be able to access a History page listing their previously uploaded images and the corresponding extracted text.
8. Security & Data Handling: Ensure that uploaded images are processed securely and that sensitive data is handled with care (basic security measures for file uploads and size limits).
9. Database Integration: Store uploaded images and the extracted text in a local database (e.g., SQLite or MySQL). The database keeps a record of the image files along with their extracted text for future reference and retrieval.
10. Documentation: Provide clear setup instructions for running the website offline, including installing Tesseract OCR and setting up the local server. Document the development process with comments in the code explaining the setup and functioning of the website.

Skills Required:
- OCR Integration: experience with Tesseract OCR or other offline OCR tools.
- Web Development: proficiency in HTML, CSS, and JavaScript for the frontend, and Node.js, Flask, or Django for the backend.
- Image Processing: experience with image manipulation using libraries such as OpenCV or Pillow.
- Offline Deployment: knowledge of deploying and running web applications locally without internet access.

Timeline & Budget: Please provide an estimated timeline for completion and an hourly or project-based rate.

To Apply, please include:
1. A brief description of your experience with OCR projects.
2. Examples of similar offline projects you have worked on.
3. Your proposed timeline and budget for the project.
Skills: JavaScript, HTML, CSS, Tesseract OCR, Python, API Development, Database (SQL)
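As a sketch of the core upload-and-extract flow described above, here is a minimal Flask endpoint that runs Tesseract locally via pytesseract with the grayscale preprocessing the posting mentions. The route name and upload field are assumptions; registration, history, and database handling are omitted.

```python
# pip install flask pillow pytesseract  (Tesseract itself must be installed locally)
import io

import pytesseract
from flask import Flask, jsonify, request
from PIL import Image, ImageOps

app = Flask(__name__)

@app.route("/extract", methods=["POST"])   # endpoint name is an assumption
def extract_text():
    upload = request.files.get("image")    # expects a form field called "image"
    if upload is None:
        return jsonify(error="no image uploaded"), 400
    img = Image.open(io.BytesIO(upload.read()))
    # Basic preprocessing from the brief: convert to grayscale and upscale small images.
    img = ImageOps.grayscale(img)
    if img.width < 1000:
        img = img.resize((img.width * 2, img.height * 2))
    text = pytesseract.image_to_string(img)   # runs the local Tesseract binary
    return jsonify(text=text)

if __name__ == "__main__":
    app.run(host="127.0.0.1", port=5000)   # local-only server, no internet required
```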
Fixed budget: 15 USD (posted 9 hours ago)
Cross-Platform Android/iOS SDK Creation for YOLOv7 Object Detection Models
Budget: ~147 - 442 USD | Posted: 10 hours ago
Client: Excellent rank; $36,808 total spent; 33 hires (5 active); 1 open job; 4.83 from 25 reviews; registered 10/08/2017; India
I have a Python web app integrated with three custom-trained models that works as follows:
1. Takes an image as input.
2. Checks the image orientation and corrects it if it is wrong.
3. Passes the correctly oriented image to the first model, which checks whether the image is valid or invalid.
4. Passes a valid image to the second model, which locates the region of interest.
5. Crops the region of interest from step 4 and passes it to the third model, which detects readings.
6. Shows the readings in the correct order.

I have already converted all three models to TFLite format. My requirement is a cross-platform SDK (Android/iOS) that fulfills the steps above, so the developer needs to implement this logic, build the SDK, and provide support for integrating the SDK into an Android/iOS app. The SDK must be built following best coding practices. As a screening step, you must show me a working SDK with a sample YOLOv7 TFLite model that I can provide. After delivering the sample SDK, the developer must sign an NDA and can start working on this project immediately. Make sure you have very good knowledge of the OpenCV library and that the SDK maintains compatibility across both Android and iOS. The SDK should support real-time processing of images to detect objects instantly.

Skills: Mobile App Development, Android, OpenCV, iOS Development, Object Detection
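The six-step pipeline above chains three TFLite models. Before building the mobile SDK, the flow can be prototyped in Python with the TFLite interpreter; the sketch below assumes placeholder model file names, a single input/output tensor per model, and a trivial orientation rule, none of which come from the posting.

```python
import numpy as np
import tensorflow as tf

def run_tflite(model_path, input_array):
    """Run a single-input, single-output TFLite model and return its output tensor."""
    interpreter = tf.lite.Interpreter(model_path=model_path)
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]
    interpreter.set_tensor(inp["index"], input_array.astype(inp["dtype"]))
    interpreter.invoke()
    return interpreter.get_tensor(out["index"])

def pipeline(image):   # image: HxWx3 array already resized to the models' input size
    # Step 2: orientation fix (placeholder rule: rotate landscape images to portrait).
    if image.shape[1] > image.shape[0]:
        image = np.rot90(image)
    batch = image[np.newaxis, ...] / 255.0

    # Step 3: validity check (assumed to output a single probability).
    if run_tflite("validity.tflite", batch)[0][0] < 0.5:
        return None
    # Step 4: region of interest (assumed to output [x1, y1, x2, y2] in pixels).
    x1, y1, x2, y2 = run_tflite("roi.tflite", batch)[0].astype(int)
    # Step 5: crop and pass to the readings model (resizing the crop to that
    # model's input size is omitted here for brevity).
    crop = image[y1:y2, x1:x2][np.newaxis, ...] / 255.0
    # Step 6: post-processing/ordering depends on the readings model's output format.
    return run_tflite("readings.tflite", crop)
```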
Fixed budget: 12,500 - 37,500 INR (posted 10 hours ago)
Insurance and Estate Planning
Budget: not specified | Posted: 13 hours ago
Client: Risky rank; 1 open job; United States
Required Connects: 7
I want a high-end, wealth-oriented look that converts for clients and prospective agents: engaging, prompting, and able to schedule appointments. We will have events and other seminars. My company name is Echo Legacy Financial Group. This is my first site. I have a Wix account, but I would like a platform that can be better optimized. I'm experienced with development and SEO. I will have a bunch of questions :-) Do you schedule a preliminary call to get a feel for the job?
Skills: Python, TensorFlow, PyTorch, Keras, OpenCV, YOLO, spaCy, NLTK, LLaMA, Hugging Face, OpenAI API, LangChain, Docker, Amazon Web Services, TensorRT
Budget: not specified (posted 13 hours ago)
Computer Vision Software Engineer
Budget: not specified | Posted: 1 day ago
Client: Risky rank; 1 open job; registered 26/11/2024; United States
Required Connects: 7
I am looking for a computer vision engineer who understands diffusion models, image classification, and augmented reality. Needs to be able to work with Android.
Skills: Machine Learning, Computer Vision, Python, MATLAB, SQL, C++, Deep Learning, PyTorch, OpenCV, Image Processing, Data Mining, Research Paper Writing, Jupyter Notebook, AWS Development, Digital Signal Processing
Budget: not specified (posted 1 day ago)
Zoom AI Eye-Contact Correcting App
Budget: not specified | Posted: 1 day ago
Client: Excellent rank; $28,741 total spent; 6 hires (3 active); 7 jobs posted; 86% hire rate; 1 open job; $45.91/hr average rate paid over 169 hours; 5.00 from 3 reviews; registered 13/09/2021; United States
Required Connects: 19
Zoom AI Eye-Contact Correcting App
1. Project Overview
- Objective: Develop an AI-powered app for Zoom that corrects eye contact by adjusting the user's video stream in real time.
- Platform: Desktop application or Zoom-integrated app via the Zoom App Marketplace.
- Core Features: real-time face and eye tracking; AI-based video adjustment for natural eye contact; seamless integration with Zoom or functionality as a virtual camera; user-friendly interface with toggles and calibration settings.

2. Technology Stack
- Frontend: React, Electron.js (for the desktop app); HTML, CSS, JavaScript.
- Backend: Python (for AI processing), Node.js (for server logic); Flask/Django (Python) or Express.js (Node.js).
- AI/Computer Vision: OpenCV, MediaPipe, Dlib; pre-trained models for face and eye tracking (e.g., AffectNet, GazeCapture).
- Real-Time Video Processing: WebRTC for video streaming; FFmpeg for video manipulation (if required).
- Zoom Integration: Zoom Video SDK, Zoom Meeting SDK; Zoom's REST APIs for app integration and user authentication.
- Database: Firebase or MongoDB for storing user preferences (optional).
- DevOps: CI/CD via GitHub Actions or Jenkins; cloud hosting on AWS, Azure, or Google Cloud for backend services.

3. Core App Features
1. Real-Time Eye Contact Adjustment: detect the user's face and eye position using computer vision; adjust the video feed to simulate direct eye contact with AI-driven transformations.
2. Seamless Zoom Integration: use the Zoom Video SDK to access and manipulate video streams; offer options to enable or disable eye-contact correction during meetings.
3. User Interface: controls to toggle eye-contact correction; calibration options for individual preferences; live preview of the video feed with corrections applied.
4. Low-Latency Video Processing: optimize video processing to ensure minimal lag during live meetings.
5. Privacy Protection: local processing of video streams to ensure user privacy; no storage of user video or data unless explicitly authorized.

4. Development Timeline
- Phase 1: Research & Prototyping (2 weeks): evaluate available AI models for face and eye tracking; create a proof of concept for real-time video adjustment.
- Phase 2: Backend Development (4 weeks): set up the Zoom Video SDK and APIs; implement video stream processing logic.
- Phase 3: Frontend Development (3 weeks): build the user interface with controls for eye-contact correction; integrate video preview and toggles.
- Phase 4: Testing & Optimization (2 weeks): test for performance, latency, and accuracy; optimize for different hardware configurations.
- Phase 5: Deployment (1 week): package the app for the Zoom App Marketplace and/or as a standalone virtual camera.

5. Deliverables
- Fully functional app integrated with Zoom or operating as a virtual camera.
- User-friendly interface with customization options.
- Documentation: user guide and developer documentation for future enhancements.
- Deployment assistance to the Zoom App Marketplace (if applicable).

Upwork Project Description
Title: AI-Powered Eye Contact Correcting App for Zoom
Description: We're looking for a talented developer or team to build an innovative AI-powered app for Zoom that corrects eye contact in real time. This app will leverage advanced computer vision to adjust video streams and simulate natural eye contact, even when the user is looking at different parts of the screen.
Key Features: real-time face and eye tracking using AI; integration with Zoom via the Zoom SDK or functionality as a virtual camera; user-friendly interface for toggles and customization; optimized video processing for low latency; privacy-focused design with local video processing.
Technical Requirements: experience with Zoom SDKs (Zoom Video SDK or Meeting SDK); proficiency in AI/computer vision tools (OpenCV, MediaPipe, Dlib); expertise in Python (for AI) and JavaScript/Node.js (for frontend/backend development); familiarity with WebRTC for real-time video manipulation; strong understanding of UI/UX design for seamless user experiences.
Deliverables: fully functional desktop application or Zoom-integrated app; clear user and developer documentation; assistance with deployment to the Zoom App Marketplace or packaging as a standalone virtual camera.
Budget: $[Your Budget Range, e.g., $5,000-$10,000] (open to discussion based on expertise).
Timeline: [Insert your preferred timeline, e.g., 10-12 weeks].
If you're experienced in developing cutting-edge AI applications and want to be part of a groundbreaking project, we'd love to hear from you! Submit your proposal today and join us in transforming video communication.
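One of the delivery paths listed above is a standalone virtual camera rather than a Zoom App Marketplace integration. A minimal sketch of that pipeline is below, using OpenCV for capture and the pyvirtualcam package for output; the actual eye-contact correction is left as a placeholder function, and pyvirtualcam itself is a suggestion rather than something the brief names.

```python
# pip install opencv-python pyvirtualcam  (requires a virtual camera backend such as OBS)
import cv2
import pyvirtualcam

def correct_eye_contact(frame_rgb):
    """Placeholder for the AI eye-contact correction step (face/eye tracking + warping)."""
    return frame_rgb

def main(width=1280, height=720, fps=30):
    cap = cv2.VideoCapture(0)   # physical webcam
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)
    with pyvirtualcam.Camera(width=width, height=height, fps=fps) as cam:
        print(f"virtual camera started: {cam.device}")
        while True:
            ok, frame_bgr = cap.read()
            if not ok:
                break
            frame_rgb = cv2.cvtColor(cv2.resize(frame_bgr, (width, height)), cv2.COLOR_BGR2RGB)
            cam.send(correct_eye_contact(frame_rgb))  # Zoom then selects this virtual camera
            cam.sleep_until_next_frame()
    cap.release()

if __name__ == "__main__":
    main()
```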
Skills: JavaScript, HTML, CSS, Zoom Video Conferencing
Budget: not specified (posted 1 day ago)
AI Behavioural Safety & Compliance System
Fixed budget: 250 - 750 USD | Posted: 1 day ago
Client: Risky rank; 1 open job; registered 27/12/2022; Malaysia
I would like to create a prototype. Here’s a draft project brief
Project Brief: AI-Driven Behavioral Safety and Compliance System

1. Project Overview: We aim to develop an AI-driven safety and compliance system tailored for industries like construction and manufacturing. The solution should identify workplace hazards, detect unsafe behaviors in real time, and ensure compliance with safety regulations through transparent monitoring and record-keeping.

2. Objectives: Reduce workplace accidents by identifying and addressing unsafe behaviors. Improve employer compliance with safety regulations. Provide real-time monitoring and actionable insights for risk control.

3. Core Features:
A. AI-Powered Safety Monitoring: object detection for PPE usage (e.g., helmets, gloves); behavior analysis to flag unsafe actions (e.g., improper use of equipment).
B. IoT Integration: IoT sensors to monitor air quality, temperature, noise levels, or vibrations, plus worker location and movement in high-risk zones.
C. Compliance Tracking: blockchain integration for transparent and tamper-proof safety records; automated reporting for audits and regulatory requirements.
D. Risk Control Management: solutions for hazard elimination, engineering controls, substitution, administrative controls, and PPE management.
E. User-Friendly Dashboard: centralized platform for viewing real-time data and historical trends; notifications and alerts for immediate action on detected risks.

4. Target Users: Industries: construction and manufacturing. Stakeholders: HSE officers, managers, and compliance teams.

5. Deliverables: A functional prototype of the system with AI and IoT capabilities. Integration with blockchain for compliance tracking. Mobile and web dashboard for monitoring and reporting.

6. Timeline and Budget: Expected timeline: [Insert duration, e.g., 6 months].

7. Technical Requirements: Preferred AI frameworks: TensorFlow, PyTorch, or OpenCV, with a focus on object detection, behavior analysis, and predictive modeling. IoT integration: sensors for environmental monitoring (temperature, air quality, etc.) and location tracking (e.g., GPS, RFID). Blockchain: Ethereum, Hyperledger, or similar frameworks, used to store safety data for audits and compliance.

8. Partner Expectations: Provide clear milestones and deliverables. Expertise in AI, IoT, and blockchain technologies. Support during testing and deployment phases. Ongoing maintenance and feature upgrades (optional).

Additional notes: The prototype should include the full feature set detailed in the project brief and be deployable on cloud platforms for scalability and remote access. The dashboard should support both web and mobile platforms. Prioritize AI-powered safety monitoring: real-time object detection for PPE usage and behavior analysis to flag unsafe actions, with the AI prioritizing detection of compliance issues first. The AI models should be highly customized to specific industry needs and safety regulations, and should incorporate advanced behavior analysis with predictive modeling for identifying potential risks. The system should comply with OSHA guidelines and provide real-time notifications to users for immediate action on detected risks. The prototype will initially be tested in the construction industry.

Skills: Mobile App Development, iPhone, Android, iPad
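For the PPE-detection piece, a common starting point is an off-the-shelf YOLO detector fine-tuned on a PPE dataset. The sketch below uses the ultralytics package purely as an illustration; the weights file name, class names, confidence threshold, and video source are placeholder assumptions, not part of the brief.

```python
# pip install ultralytics opencv-python
import cv2
from ultralytics import YOLO

# Placeholder: a model fine-tuned on PPE classes (e.g., helmet, vest, gloves).
model = YOLO("ppe_yolo.pt")
REQUIRED_PPE = {"helmet", "vest"}   # assumed classes a worker must wear

def check_frame(frame):
    """Return the set of required PPE classes missing from a single frame."""
    detected = set()
    for result in model(frame, conf=0.4, verbose=False):
        for box in result.boxes:
            detected.add(result.names[int(box.cls)])
    return REQUIRED_PPE - detected

cap = cv2.VideoCapture("site_camera.mp4")   # placeholder video source
while True:
    ok, frame = cap.read()
    if not ok:
        break
    missing = check_frame(frame)
    if missing:
        # In the full system this would raise a dashboard alert / real-time notification.
        print("possible violation, missing PPE:", ", ".join(sorted(missing)))
cap.release()
```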
Fixed budget: 250 - 750 USD (posted 1 day ago)
Window Dimensions API OpenCV Python
Hourly rate: 10 - 15 USD/hr | Posted: 1 day ago
Client: Medium rank; $323 total spent; 6 hires (2 active); 29 jobs posted; 21% hire rate; 25 open jobs; 5.00 from 2 reviews; registered 13/08/2024; Pakistan
Required Connects: 11
An API is to be provided that receives an image file of a window section taken for a construction project. The image comes from the camera of a mobile device (smartphone/tablet) and meets the following requirements:
The window or the window hole (building/construction) is photographed straight from the front. The window is rectangular or square. The image is well lit. A reference object (e.g., a green cube) with known dimensions can be placed in the image.

The API detects the object (window or window hole) and generates its outer dimensions. The tolerance should be as low as possible (max 1.0 cm). The API returns JSON with the following properties:
- width: width of the window hole
- height: height of the window hole
- imagepath: path to the picture annotated with the frame and dimensions, so it can be checked whether the section was captured correctly

Only the API needs to be programmed. The image illustrates Step 2 and Step 3. I need the hardware/hosting requirements so the project can run in the customer's environment. It is possible that a different reference object will be used for the project (e.g., a square magnetic board or a standee).
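A typical approach to the scaling step is to find the green reference cube with an HSV mask, derive centimetres-per-pixel from its known size, then measure the largest rectangular contour. The sketch below shows only that measurement logic (no web framework); the HSV range and the 10 cm reference size are assumptions, while the JSON field names follow the posting.

```python
import json
import cv2
import numpy as np

REF_SIZE_CM = 10.0   # known edge length of the reference cube (assumption)
GREEN_LO, GREEN_HI = (40, 60, 60), (85, 255, 255)   # placeholder HSV range for the green cube

def measure_window(image_path, out_path="annotated.jpg"):
    img = cv2.imread(image_path)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

    # 1. Find the reference cube and derive the cm-per-pixel scale from its width.
    mask = cv2.inRange(hsv, np.array(GREEN_LO), np.array(GREEN_HI))
    ref_contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    ref = max(ref_contours, key=cv2.contourArea)
    _, _, ref_w, _ = cv2.boundingRect(ref)
    cm_per_px = REF_SIZE_CM / ref_w

    # 2. Find the window / window hole as the largest contour in the edge map.
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    window = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(window)

    # 3. Annotate and return the JSON payload described in the posting.
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 0, 255), 3)
    cv2.imwrite(out_path, img)
    return json.dumps({"width": round(w * cm_per_px, 1),
                       "height": round(h * cm_per_px, 1),
                       "imagepath": out_path})

print(measure_window("window_photo.jpg"))   # placeholder input image
```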
Skills: Python, OpenCV, API, JSON
Hourly rate: 10 - 15 USD (posted 1 day ago)
C++ Developer with OpenCV and Tesseract Experience
Fixed budget: 100 USD | Posted: 1 day ago
Client: Medium rank; $635 total spent; 4 hires (5 active); 6 jobs posted; 67% hire rate; 3 open jobs; $3.00/hr average rate paid over 303 hours; registered 22/03/2023; Thailand
Required Connects: 13
# C++ Developer Needed - Clash of Clans Base Analysis Tool
Need a C++ developer to create a tool that analyzes Clash of Clans base layouts and finds valid troop deployment points. I currently have a working Python script that does this, but I'm looking to rebuild it in C++ for better performance.

## Project Requirements:
- Build a C++ application using OpenCV and Tesseract OCR
- Analyze screenshots of Clash of Clans bases to find valid deployment points
- Must work across all CoC terrains and base layouts
- Create separate functions to analyze deployment points for each side of the base
- Add visual feedback (boxes drawn on the image showing deployment zones)

## Technical Skills Required:
- C++ programming
- Experience with OpenCV
- Familiarity with Tesseract OCR
- Image processing knowledge

I can provide my current Python script as a reference for the logic. The new C++ version should be more efficient and accurate at finding deployment zones from base screenshots. Contact me if you're interested and have experience with similar computer vision projects.
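The client already has the deployment logic in Python, so before porting to C++ it helps to pin down the image-processing idea. A very rough Python sketch of one possible approach (mask the base area, then mark free border cells as candidate deployment zones) is below; the colour thresholds, dilation size, and grid size are invented for illustration and will not match the client's actual script.

```python
import cv2
import numpy as np

GRID = 40   # candidate cell size in pixels (assumption)

def candidate_deployment_boxes(screenshot_path, out_path="deploy_zones.png"):
    img = cv2.imread(screenshot_path)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

    # Placeholder: treat strongly saturated pixels as "occupied by the base" and
    # dilate the mask so candidate cells keep some distance from buildings.
    occupied = cv2.inRange(hsv, (0, 80, 80), (180, 255, 255))
    occupied = cv2.dilate(occupied, np.ones((25, 25), np.uint8))

    boxes = []
    h, w = occupied.shape
    for y in range(0, h - GRID, GRID):
        for x in range(0, w - GRID, GRID):
            cell = occupied[y:y + GRID, x:x + GRID]
            if cell.mean() < 5:   # cell is essentially free of base pixels
                boxes.append((x, y, GRID, GRID))
                cv2.rectangle(img, (x, y), (x + GRID, y + GRID), (0, 255, 0), 1)

    cv2.imwrite(out_path, img)   # visual feedback requested in the brief
    return boxes

zones = candidate_deployment_boxes("base_screenshot.png")   # placeholder screenshot
print(f"{len(zones)} candidate deployment cells found")
```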
Skills: C++, OpenCV, C#, C, Desktop Application
Fixed budget: 100 USD (posted 1 day ago)
AI Researcher - Skunkworks Department (Multi-Agent Systems, AI, and Blockchain)
Hourly rate: 50 - 100 USD/hr | Posted: 1 day ago
Client: Excellent rank; 138 jobs posted; 100% hire rate; 1 open job; 4.58 from 552 reviews; registered 03/12/2015; United States
Featured
Required Connects: 19
Are you ready to shape the future of AI at the intersection of multi-agent systems, crypto technologies, and art? Hammer, a 10+ year old, bootstrapped, and profitable AI software company, is seeking an innovative and driven AI researcher to join our skunkworks team.
As part of this role, you'll collaborate directly with our CEO to conceptualize, prototype, and experiment with groundbreaking ideas. We are looking for a visionary who thrives on exploring uncharted territory in AI and its applications.

What You'll Do:
- Design, implement, and experiment with cutting-edge AI models and multi-agent systems.
- Research and prototype decentralized autonomous systems, including autonomous chatbots and agents.
- Integrate AI with blockchain technologies, exploring use cases such as Trusted Execution Environments (TEEs).
- Collaborate on creative projects at the intersection of AI and art, including generative AI for visual and audio media.
- Build and refine experimental software using emerging tools and frameworks.

What We're Looking For:
- Expertise in AI Development: proficiency in machine learning frameworks such as PyTorch, TensorFlow, or Flux.jl; experience with generative AI models, including Stable Diffusion or similar diffusion models; familiarity with tools like ComfyUI, Runway, or chaiNNer for AI pipeline orchestration.
- Autonomous Agents & Chatbots: strong understanding of multi-agent systems, reinforcement learning, and agent-based modeling; experience with frameworks like LangChain, OpenAI APIs, or Auto-GPT; interest in designing decentralized, autonomous chatbots.
- Blockchain Knowledge: experience with blockchain technologies such as Ethereum, Solidity, or Polkadot; knowledge of trusted execution environments (TEEs) and their applications in secure AI deployments; familiarity with smart contract development, decentralized identities, or DAOs.
- Programming Proficiency: strong coding skills in Python, Julia, or Rust; familiarity with web3 frameworks (e.g., Web3.js, ethers.js) is a plus.
- Creative Edge: passion for exploring the intersection of AI and art, including generative models for images, videos, and beyond.

Bonus Skills:
- Experience with AI middleware tools like Weights & Biases or MLFlow for model tracking and experimentation.
- Familiarity with TEE tools such as Intel SGX or AWS Nitro Enclaves.
- Knowledge of image processing libraries (e.g., OpenCV, PIL).
- Background in cryptographic techniques or zero-knowledge proofs.
Skills: Artificial Intelligence, Python, Machine Learning, Artificial Neural Network, Critical Thinking Skills
Hourly rate: 50 - 100 USD (posted 1 day ago)
Generative AI, Computer Vision
Hourly rate: 30 - 60 USD/hr | Posted: 1 day ago
Client: Excellent rank; $23,243 total spent; 51 hires (32 active); 28 jobs posted; 100% hire rate; 1 open job; $17.04/hr average rate paid over 1,139 hours; 4.82 from 34 reviews; registered 19/11/2022; Switzerland
Required Connects: 13
I am looking for a skilled computer vision expert to develop a solution that integrates a realistic person's face into a given example image (e.g., a cartoon princess dress). The goal is to create two high-quality, visually consistent versions of the example image with the realistic face seamlessly incorporated.
Key Responsibilities:
- Design and implement a solution that uses input prompts or example images as templates.
- Accurately map and blend realistic facial features into predefined templates while maintaining style consistency.
- Generate two visually similar variations of the original example image.

Requirements:
- Expertise in computer vision, image processing, and machine learning.
- Proficiency in tools and frameworks such as OpenCV, TensorFlow, PyTorch, or similar.
- Experience with face detection, blending, and style adaptation techniques.
- Familiarity with APIs or tools for image editing, such as the Photoshop API, is a plus.
- A strong portfolio demonstrating similar work or relevant projects.
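For the blending half of this task, OpenCV's Poisson seamless cloning is a common baseline before moving to generative or style-adaptation models. The sketch below detects a face with a stock Haar cascade and clones it into a target template; the file names and target placement are placeholders, and for a stylised template (e.g., a cartoon princess) a diffusion-based face swap would likely still be needed on top of this.

```python
import cv2
import numpy as np

def blend_face_into_template(face_image_path, template_path, out_path="blended.png"):
    face_img = cv2.imread(face_image_path)
    template = cv2.imread(template_path)

    # 1. Detect the realistic face with OpenCV's bundled Haar cascade.
    cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(cv2.cvtColor(face_img, cv2.COLOR_BGR2GRAY), 1.1, 5)
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])   # largest detected face
    face_crop = face_img[y:y + h, x:x + w]

    # 2. Resize the face to a placeholder region of the template (here: centred, 1/3 width).
    target_w = template.shape[1] // 3
    face_crop = cv2.resize(face_crop, (target_w, target_w))
    center = (template.shape[1] // 2, template.shape[0] // 3)   # assumed head position

    # 3. Poisson blending keeps the template's lighting and colour around the seam.
    mask = 255 * np.ones(face_crop.shape[:2], dtype=np.uint8)
    result = cv2.seamlessClone(face_crop, template, mask, center, cv2.NORMAL_CLONE)
    cv2.imwrite(out_path, result)
    return out_path

blend_face_into_template("realistic_face.jpg", "princess_template.jpg")   # placeholder files
```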
Skills: Computer Vision, Artificial Intelligence, Neural Network, Deep Learning, PyTorch, TensorFlow, AI Image Generation
Hourly rate: 30 - 60 USD (posted 1 day ago)
AI model for psychologist
Budget: not specified | Posted: 1 day ago
Client: Risky rank; 1 open job; France
Required Connects: 7
I would like to build a model to simulate a discussion with a psychologist. I have audio recordings of conversations between a psychologist and a patient.
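The audio recordings would first need to be turned into text before any dialogue model can be trained or prompted on them. A minimal transcription sketch using the open-source openai-whisper package is below; the model size, folder name, and output format are assumptions, and speaker diarization (separating psychologist from patient) would still have to be added.

```python
# pip install openai-whisper  (also requires ffmpeg on the system)
import glob
import json

import whisper

model = whisper.load_model("base")   # model size is an arbitrary choice

transcripts = []
for path in glob.glob("sessions/*.wav"):   # placeholder folder of session recordings
    result = model.transcribe(path)
    transcripts.append({
        "file": path,
        "text": result["text"],
        "segments": [(s["start"], s["end"], s["text"]) for s in result["segments"]],
    })

# The transcripts can then be used to fine-tune or prompt a dialogue model;
# speaker labels (psychologist vs. patient) still have to be added separately.
with open("transcripts.json", "w", encoding="utf-8") as f:
    json.dump(transcripts, f, ensure_ascii=False, indent=2)
```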
Skills: pandas, Computer Vision, TensorFlow, NumPy, PyTorch, Python Scikit-Learn, Python, Machine Learning, OpenCV, 3D Rendering, Retrieval Augmented Generation, Chatbot, AI Agent Development, OpenAI API, Generative AI
Budget: not specified (posted 1 day ago)
C++ Developer for Linux Medtech Application [OpenCV, SDL, ImGui]
Fixed budget: 200 USD | Posted: 1 day ago
Client: Good rank; $1,553 total spent; 11 hires (4 active); 11 jobs posted; 100% hire rate; 1 open job; 5.00 from 4 reviews; registered 08/06/2023; Vietnam
Required Connects: 13
Eye Tracker is a medtech project aimed at providing reliable diagnostic tools for neurological conditions based on the analysis of eye movements.

The primary system is Linux, Ubuntu 22.04. It uses OpenCV and a number of external APIs for video capture, SDL for video display, ImGui for the user interface, and the C++ Boost libraries for utility functions. This would likely be the first, largely evaluative assignment among many to come. The scope of work is as follows:
1. Implement a basic multithreading task: thread-safe switching between camera modes. That is, stop the thread that captures frames, stop the thread that processes them, resize the GUI, then restart capturing and processing.
2. Pseudo-animation of the GUI: gradually change the window dimensions over a timeframe (remember, it's ImGui).
3. An overall review of the code and sound advice will increase our chances of permanent cooperation.
Let me know if you're interested; we'll provide you the code for evaluating the project and a more detailed work specification. PS: If possible, we'd like to see whatever C++ code (portfolio and pet projects) you have in open access.
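The project itself is C++, but the stop/resize/restart pattern in item 1 is language-agnostic: signal both threads to stop, join them, reconfigure, then start fresh threads. For illustration only, here is the same pattern in Python with OpenCV; the C++ version would use std::thread/std::atomic (or Boost) instead, and the camera indices are placeholders.

```python
import queue
import threading
import cv2

def capture_loop(cam_index, frames, stop_event):
    cap = cv2.VideoCapture(cam_index)
    while not stop_event.is_set():
        ok, frame = cap.read()
        if ok:
            try:
                frames.put(frame, timeout=0.1)
            except queue.Full:
                pass   # drop frames if the consumer lags behind
    cap.release()

def process_loop(frames, stop_event):
    while not stop_event.is_set():
        try:
            frame = frames.get(timeout=0.1)
        except queue.Empty:
            continue
        _ = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)   # placeholder processing step

def start_pipeline(cam_index):
    frames, stop_event = queue.Queue(maxsize=4), threading.Event()
    threads = [threading.Thread(target=capture_loop, args=(cam_index, frames, stop_event)),
               threading.Thread(target=process_loop, args=(frames, stop_event))]
    for t in threads:
        t.start()
    return threads, stop_event

def switch_camera_mode(threads, stop_event, new_cam_index):
    stop_event.set()          # 1. signal both threads to stop
    for t in threads:
        t.join()              # 2. wait until capture and processing have fully stopped
    # 3. resize / reconfigure the GUI here before restarting
    return start_pipeline(new_cam_index)   # 4. restart with the new mode

threads, stop = start_pipeline(0)
threads, stop = switch_camera_mode(threads, stop, 1)   # example mode switch
stop.set()
for t in threads:
    t.join()
```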
Skills: C++, OpenCV, Linux, ImGui, Desktop Application
Fixed budget:
200 USD
1 day ago
|
|||||
Image Recognition App for a 500,000+ Image Database with Mixed Sizes
|
15 - 60 USD
/ hr
|
2 days ago |
Client Rank
- Excellent
$192'717 total spent
387 hires
, 19 active
481 jobs posted
80% hire rate,
1 open job
20.12 /hr avg hourly rate paid
4126 hours
4.90
of 291 reviews
Registered at: 13/10/2012
Switzerland
|
||
Required Connects: 14
Developer Needed for Image Recognition App with Large Database and Mixed Image Sizes
Description: We are seeking an experienced developer to create a mobile app capable of recognizing black-and-white images from a 500-page book, containing 500,000 images in total. Each page features 1,000 images: 997 small images of consistent size (5.5 mm x 11 mm) arranged in a fixed grid layout, and 3 larger images placed at random locations on each page. The app should segment and recognize these images, match them against a database of 500,000 entries, and redirect the user to specific webpages with additional information about the matched image.
Key Features:
Image Scanning and Segmentation:
- Scan a 30 cm x 30 cm page using the device's camera.
- Grid-based segmentation for the 997 small images: divide the page into a fixed grid to extract the small images.
- Random segmentation for the 3 larger images: detect and extract larger images from their random positions.
Efficient Image Recognition:
- Match both small grid-based images and larger randomly placed images with a database of 500,000 entries.
- Leverage the fixed grid layout to improve recognition speed for small images.
- Use advanced feature extraction or deep learning to handle random placements of larger images.
Webpage Linking:
- Retrieve metadata (e.g., URLs) associated with the matched images.
- Redirect users to corresponding webpages seamlessly.
Database Integration:
- Efficiently handle a large database of 500,000 images and metadata.
- Optimize storage and retrieval for quick response times.
Technical Requirements:
Mobile App Development: expertise in building cross-platform apps (Flutter preferred) or native apps (iOS/Android).
Image Recognition:
- Feature-based algorithms (e.g., SIFT, ORB) for small images.
- Deep learning techniques (e.g., TensorFlow Lite, PyTorch) for larger, randomly placed images.
Segmentation Skills:
- Fixed grid segmentation for small images.
- Object detection or edge detection for larger images.
Database Handling:
- Design a scalable system to manage 500,000 image records (e.g., Firebase, PostgreSQL, or Elasticsearch).
- Ensure fast retrieval (under 1 second) of metadata during recognition.
Cloud Hosting: experience with AWS, Google Cloud, or similar platforms for backend and database hosting.
Deliverables:
- A fully functional mobile app for Android and iOS.
- Backend system for database integration and image matching.
- Segmentation and recognition logic to handle both grid-aligned and randomly placed images.
- Full documentation, including code and deployment instructions.
Challenges to Address:
- Dual segmentation: handling both grid-based small images and larger randomly placed images.
- Recognition speed: ensuring fast and accurate matching for a large database (500,000 images).
Additional Document Provided:
- 4 sample pages: example 30 cm x 30 cm pages with 1,000 images, illustrating grid alignment and random placements.
- The website to which the images will need to connect is 500k.art/view
Budget and Timeline: Please provide:
- Estimated timeline to complete the project.
- Cost proposal (fixed-price or hourly rate) with estimated hours.
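A minimal sketch of the fixed-grid segmentation plus ORB feature-extraction steps, assuming the page has already been scanned, deskewed, and cropped to its 30 cm x 30 cm borders (the grid dimensions and file names below are illustrative placeholders, not the book's real layout):

# Sketch: split a rectified page into fixed grid cells and compute ORB
# descriptors per cell for matching against the 500,000-entry database.
import cv2

def grid_cells(page_bgr, rows: int, cols: int):
    """Split a rectified page image into rows x cols equally sized cells."""
    h, w = page_bgr.shape[:2]
    cell_h, cell_w = h // rows, w // cols
    for r in range(rows):
        for c in range(cols):
            cell = page_bgr[r * cell_h:(r + 1) * cell_h,
                            c * cell_w:(c + 1) * cell_w]
            yield (r, c), cell

orb = cv2.ORB_create(nfeatures=200)
page = cv2.imread("sample_page.jpg")               # placeholder file name
for (r, c), cell in grid_cells(page, rows=37, cols=27):  # placeholder grid size
    gray = cv2.cvtColor(cell, cv2.COLOR_BGR2GRAY)
    keypoints, descriptors = orb.detectAndCompute(gray, None)
    # look up `descriptors` in the database index and fetch the matched entry's URL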
Skills: Image Processing, iOS, Android, OpenCV, Computer Vision, OCR Algorithm
Hourly rate:
15 - 60 USD
2 days ago
|
|||||
Alexa-like audio input/output hardware module using ESP32
|
not specified | 2 days ago |
Client Rank
- Risky
1 jobs posted
1 open job
Registered at: 07/09/2024
India
|
||
Required Connects: 8
I want to build an ESP32-based module that streams audio input to the server side and streams audio output from the server back to the ESP32, with a use case similar to Alexa.
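A minimal server-side sketch of the audio round trip, assuming a Flask endpoint (Flask is listed in the skills) that receives raw audio bytes from the ESP32 and returns a synthesized reply; the /audio route and the synthesize_reply helper are hypothetical, not part of the posting:

# Hypothetical server endpoint: accept audio posted by the ESP32 and return
# a reply as raw bytes. The STT / assistant / TTS steps are left as stubs.
from flask import Flask, request, send_file
import io

app = Flask(__name__)

def synthesize_reply(pcm: bytes) -> bytes:
    """Placeholder: echo the request back so the round trip can be tested."""
    return pcm

@app.route("/audio", methods=["POST"])
def audio():
    pcm = request.get_data()              # raw audio bytes sent by the ESP32
    # ... run speech-to-text, an assistant step, then text-to-speech here ...
    reply_pcm = synthesize_reply(pcm)
    return send_file(io.BytesIO(reply_pcm),
                     mimetype="application/octet-stream")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)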
Skills: Python, C, Arduino, Raspberry Pi, Embedded Linux, Bluetooth, AWS IoT Core, Azure IoT HuB, OpenCV, Amazon Kinesis Video Streams, Flask, iOS, Android, MQTT, Internet of Things
Budget:
not specified
2 days ago
|
|||||
Car Parts Recognition Engine Development
|
30 - 250 USD | 2 days ago |
Client Rank
- Risky
1 open job
Registered at: 06/12/2020
Netherlands
|
||
I am seeking a skilled engineer or team to create code capable of recognizing all types of car parts, including engine components, body parts, and interior parts.
We have an app that calls an API on our server; this API needs to call the code that recognizes the car part. The code must return all available details of the recognized car parts. It is possible that recognition finds more than one record; in that case it must return all high-scoring records. Our API is built in PHP.
Key Features:
- Recognition of all car parts
- Scanning and identifying QR codes
- Reading barcodes
- Interpreting part numbers
- Identifying a part from an image or picture
- Augmented Reality overlay for part identification and information
- Voice commands for easier navigation and search of parts
Ideal Skills:
- Strong background in computer vision and machine learning
- Experience with QR code, barcode, and part number recognition
- Proficiency in developing software for hardware interfacing
- Ability to implement image recognition features
The goal is to have a comprehensive recognition system that can be used to streamline car parts inventory and identification processes. Your past work should demonstrate your ability to deliver a project of this scope and complexity.
Skills: PHP, C++ Programming, OpenCV, Image Recognition, Text Recognition
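A minimal sketch of the "return every high-scoring record" behaviour the PHP API would call into, using ORB feature matching and OpenCV's QR detector; the descriptor database layout and the score threshold are assumptions for illustration:

# Sketch: match a photo of a part against stored ORB descriptors and return
# every record whose match score exceeds a threshold (QR codes short-circuit).
import cv2
import numpy as np

orb = cv2.ORB_create()
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def recognize(image_path: str, database: dict[str, np.ndarray],
              min_score: float = 0.25) -> list[dict]:
    """Return all database records scoring above `min_score`, best first."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)

    # QR codes are decoded directly instead of going through visual matching.
    qr_data, _, _ = cv2.QRCodeDetector().detectAndDecode(gray)
    if qr_data:
        return [{"part_id": qr_data, "score": 1.0, "source": "qr"}]

    _, query_desc = orb.detectAndCompute(gray, None)
    results = []
    if query_desc is None:
        return results
    for part_id, stored_desc in database.items():
        matches = bf.match(query_desc, stored_desc)
        score = len(matches) / max(len(stored_desc), 1)
        if score >= min_score:
            results.append({"part_id": part_id, "score": round(score, 3),
                            "source": "orb"})
    return sorted(results, key=lambda r: r["score"], reverse=True)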
Fixed budget:
30 - 250 USD
2 days ago
|
|||||