import glob
import os

# Current directory
current_dir = os.path.dirname(os.path.abspath(__file__))
# Directory where the data will reside, relative to 'darknet.exe'
path_data = 'data/obj/'
# Percentage of images to be used for the test set
percentage_test = 10
# Create and/or truncate train.txt and test.txt
file_train = open('train.txt', 'w')
file_test = open('test.txt', 'w')
# Populate train.txt and test.txt: every Nth image goes to the test set
counter = 1
index_test = round(100 / percentage_test)
for pathAndFilename in glob.iglob(os.path.join(current_dir, "*.jpg")):
    title, ext = os.path.splitext(os.path.basename(pathAndFilename))
    if counter == index_test:
        counter = 1
        file_test.write(path_data + title + '.jpg' + "\n")
    else:
        file_train.write(path_data + title + '.jpg' + "\n")
        counter = counter + 1
file_train.close()
file_test.close()
We are still using the FLIR LEPTON 1, 3, and 3.5 this year, and of course the small-drone tests have steadily been upgraded.
This year we expanded on three main items. Until now we have been too busy to update our blog, but we have a lot of great ideas to share, so we will do our best to introduce them in future posts.
This year our team grew to three members.
It is a joint work between Tohoku and Kansai (Miyagi, Hyogo, and Kyoto). We made the logo shown here and changed our name to WT&D.
1. Can the LEPTON be mounted on a small drone?
It is evolving steadily. We will talk about the results at the venue!!
What, weight reduction for the Dobby?
2. Can 3D display be achieved with two small thermographic images?
We attached a video near the end of the previous article; it looks like this.
You can view the stereoscopic images with stereo glasses or smartphone VR goggles. Streaming two thermographic channels over Wi-Fi simultaneously in real time was previously uncharted territory. It means the computer can recognize perspective: the depth image and the 3D mesh image in this video show that distance is definitely being recognized.
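As a rough illustration of how distance can be recovered from two thermal views, here is a minimal block-matching disparity sketch in NumPy. This is not our actual pipeline; only the 80 × 60 resolution matches the LEPTON 1, and the block size, disparity range, and synthetic frames are assumptions for illustration.

```python
import numpy as np

def disparity_map(left, right, block=5, max_disp=8):
    """Naive horizontal block matching: for each pixel in the left image,
    find the shift of the best-matching block in the right image.
    Larger disparity means the object is closer to the cameras."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.uint8)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            costs = [
                np.abs(patch - right[y - half:y + half + 1,
                                     x - d - half:x - d + half + 1]).sum()
                for d in range(max_disp)
            ]
            disp[y, x] = int(np.argmin(costs))
    return disp

# Two synthetic 60x80 "thermal" frames: a warm blob shifted 3 px between views
left = np.zeros((60, 80), dtype=np.float32)
right = np.zeros((60, 80), dtype=np.float32)
left[25:35, 40:50] = 100.0
right[25:35, 37:47] = 100.0   # same blob, shifted left by 3 px
d = disparity_map(left, right)
print(d.max())  # -> 3, the horizontal shift of the blob
```

A real stereo pipeline would rectify the two images and smooth the disparity, but the principle, matching blocks along a horizontal line and reading depth from the shift, is the same.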
3. Can image recognition work with small thermographic images?
Using a thermography sensor with very few pixels, we tested how accurately the hand shapes of rock-paper-scissors could be recognized. Initially we had the LEPTON 3.5 recognize using a model trained on data created with the LEPTON 3.5, but when we tried recognition with the much coarser LEPTON 1 (only 80 × 60 pixels), we found that it still worked properly. Since Wi-Fi crosstalk is expected at the venue, we plan to use LEPTON 1 series applications, which are more robust against interference. We think many people in Japan own this sensor, so we are considering introducing how to create the training data and the source code on our blog in the future.
The current software can recognize only the palm side of a person's hand; it works as a kind of temperature filter. The fact that this is possible broadens the applications enormously. And it runs not on a TX2 but on an iPhone!! (The image shows an iPhone 7 Plus.) What do you think?
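The "temperature filter" idea can be sketched as simple band thresholding on the raw thermal frame. A minimal NumPy sketch follows; the skin-temperature band (30-37 °C) and the synthetic frame are assumptions for illustration, not our actual training pipeline.

```python
import numpy as np

def palm_mask(frame_c, lo=30.0, hi=37.0):
    """Keep only pixels in the skin-temperature band (values in deg C).
    Everything outside the band is zeroed, leaving a hand silhouette
    that a classifier can then label as rock, paper, or scissors."""
    mask = (frame_c >= lo) & (frame_c <= hi)
    return np.where(mask, frame_c, 0.0), mask

# Synthetic 60x80 LEPTON-1-sized frame: 22 degC background, 34 degC "hand"
frame = np.full((60, 80), 22.0)
frame[20:45, 30:55] = 34.0           # 25 x 25 px warm region
filtered, mask = palm_mask(frame)
print(int(mask.sum()))               # -> 625 pixels kept (25 * 25)
```

Because the background is suppressed before classification, the recognizer only ever sees warm silhouettes, which is one reason even the coarse 80 × 60 sensor can work.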
And the applications: two kinds of rock-paper-scissors games (June Chan), and "my dog sings" (a simple decoder of hand signs). Ideal for brain training for the elderly!!
We built it on Thermal Cam 3 & 1, whose applications are all registered, but because it is multi-threaded it does not meet Apple's App Store requirements and cannot be distributed there. However, distribution via TestFlight is possible, so please leave a comment if you would like it.
For how to install Thermal Cam and Thermal Cam 3, please see this article. We plan to release an upgraded version, targeted for sometime next month, to improve performance.
The iOS YOLO environment we introduced last time, GitHub - hollance/YOLO-CoreML-MPSNNGraph: Tiny YOLO for iOS implemented using CoreML but also using the new MPS graph API, is written in Swift. Swift requires setting up a UI environment, which is quite troublesome for me. If it could run under openFrameworks instead, manipulating the images should become easy. The repository also includes a Python script that converts standard YOLO weights to the CoreML format, so retraining should be possible too.
As a result, after modifying the source we were able to run CoreML from openFrameworks. The crucial image recognition step seems to run considerably faster in the ported C++ version than in the Swift version. Because image manipulation is now easy, we changed the displayed content slightly. The recognition part runs on its own thread, so manipulating the image does not affect the CoreML (object recognition) display. Note that the iPhone gets quite hot during execution, so about one minute of continuous use is a reasonable limit. It runs on iPhone 7 or later.
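The threading pattern described above, where inference runs on a worker thread so the draw loop is never blocked, can be sketched in Python (our actual code is C++/openFrameworks with CoreML; the `Recognizer` class, the sleep-based stand-in for inference, and the frame numbers here are all illustrative assumptions):

```python
import threading
import time
import queue

class Recognizer(threading.Thread):
    """Runs a (stand-in) recognition step on the most recent frame only,
    so the drawing loop is never blocked by inference."""
    def __init__(self):
        super().__init__(daemon=True)
        self.frames = queue.Queue(maxsize=1)  # keep only the latest frame
        self.result = None

    def submit(self, frame):
        try:
            self.frames.put_nowait(frame)
        except queue.Full:
            pass  # drop the frame: recognition is still busy

    def run(self):
        while True:
            frame = self.frames.get()
            time.sleep(0.05)  # stand-in for CoreML inference
            self.result = f"objects in frame {frame}"

rec = Recognizer()
rec.start()
for i in range(10):       # stand-in for the draw loop
    rec.submit(i)
    time.sleep(0.02)      # drawing continues at its own rate
time.sleep(0.1)           # let the worker finish the last frame
print(rec.result)
```

Frames arriving while inference is busy are simply dropped, so the display always stays responsive and the recognizer always works on fresh data; this is why manipulating the image in the draw loop does not affect the recognition overlay.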