multithreading - Is it a good idea to use the race between multiple threads as a random number generator?


Given the unpredictability of race conditions between multiple symmetric threads, could that uncertainty be used to build a uniform random number generator?

For example, take the code from http://www.cplusplus.com/reference/mutex/call_once/ ; calling it twice would generate a random integer in [0,99].

#include <iostream>       // std::cout
#include <thread>         // std::thread, std::this_thread::sleep_for
#include <chrono>         // std::chrono::milliseconds
#include <mutex>          // std::call_once, std::once_flag

int winner;
void set_winner (int x) { winner = x; }
std::once_flag winner_flag;

void wait_1000ms (int id) {
  // count to 1000, waiting 1ms between increments:
  for (int i=0; i<1000; ++i)
    std::this_thread::sleep_for(std::chrono::milliseconds(1));
  // claim to be the winner (only the first such call is executed):
  std::call_once (winner_flag, set_winner, id);
}

int main () {
  std::thread threads[10];
  // spawn 10 threads:
  for (int i=0; i<10; ++i)
    threads[i] = std::thread(wait_1000ms, i+1);

  std::cout << "waiting for the first among 10 threads to count 1000 ms...\n";

  for (auto& th : threads) th.join();
  std::cout << "winner thread: " << winner << '\n';

  return 0;
}
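Taken literally, the idea would look something like the sketch below (the race_digit wrapper and the shortened 100 ms wait are my own glue code for illustration, not part of the linked example): run the race once to get a "digit" in [0,9], then combine two races into an integer in [0,99].

#include <chrono>
#include <iostream>
#include <mutex>
#include <thread>

// Run 10 symmetric threads; whichever reaches call_once first sets the winner.
int race_digit () {
  int winner = 0;
  std::once_flag winner_flag;
  auto wait_and_claim = [&](int id) {
    for (int i = 0; i < 100; ++i)
      std::this_thread::sleep_for(std::chrono::milliseconds(1));
    std::call_once(winner_flag, [&]{ winner = id; });
  };
  std::thread threads[10];
  for (int i = 0; i < 10; ++i)
    threads[i] = std::thread(wait_and_claim, i);   // ids 0..9
  for (auto& th : threads) th.join();
  return winner;                                   // a "digit" in [0,9]
}

int main () {
  int value = race_digit() * 10 + race_digit();    // two races -> [0,99]
  std::cout << "value: " << value << '\n';
  return 0;
}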

And call the code below (from http://advancedlinuxprogramming.com/alp-folder/alp-ch04-threads.pdf) as many times as the requested length of random bits.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>   /* sleep */

/* Prints x's to stderr. The parameter is unused. Does not return. */
void* print_xs (void* unused) {
  while (1) {
    sleep (1);
    fputc ('x', stderr);
  }
  return NULL;
}

int main () {
  pthread_t thread_id;
  /* Create a new thread. The new thread will run the print_xs function. */
  pthread_create (&thread_id, NULL, &print_xs, NULL);
  /* Print o's continuously to stderr. */
  while (1) {
    sleep (1);
    fputc ('o', stderr);
  }
  return 0;
}
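The intended use seems to be similar: sample the interleaving of the two threads once per requested bit. One possible reading, sketched below in C++ for consistency with the first example (the timing_bit helper and the 1 ms window are my own assumptions, not from the linked PDF), is to let a worker thread spin for a fixed interval and take the parity of its progress as one bit.

#include <atomic>
#include <chrono>
#include <iostream>
#include <thread>

// Derive one bit from thread-timing jitter (for illustration only).
int timing_bit () {
  std::atomic<bool> stop{false};
  std::atomic<unsigned long long> count{0};
  std::thread worker([&]{
    while (!stop.load(std::memory_order_relaxed))
      count.fetch_add(1, std::memory_order_relaxed);
  });
  std::this_thread::sleep_for(std::chrono::milliseconds(1));
  stop.store(true);
  worker.join();
  return static_cast<int>(count.load() & 1u);      // parity of the spin count
}

int main () {
  const int length = 16;                           // requested number of bits
  for (int i = 0; i < length; ++i)
    std::cout << timing_bit();
  std::cout << '\n';
  return 0;
}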

Would this be truly uniform, with no period? The sequence cannot be reproduced, which may make debugging harder.

The problem with using thread timings as a random source (besides, say, the efficiency hit) is that you have no basis for believing the values will have a nice distribution. Thread schedulers try to be fair, which means they try to give equal time to each thread, modulated by estimates of how much time each thread needs. The way this is done depends entirely on how the scheduler is implemented, and since the scheduler isn't optimizing for randomness, the result should be treated as "kinda, but not very" random. If it ends up being uniformly random, that is pure coincidence and not portable. Worse, it wouldn't be provably uniformly random, so you'd have no reason to trust that what you're doing works, and consequently no one else would have any reason to trust it either.
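To see this empirically, one could run many small races and tally which thread wins. The harness below is my own sketch (not from the question); on a typical scheduler the counts come out heavily skewed toward whichever thread is spawned first, rather than uniform.

#include <array>
#include <iostream>
#include <mutex>
#include <thread>

// Four threads race to claim the win via call_once; return the winner's id.
int race_once () {
  int winner = -1;
  std::once_flag flag;
  std::array<std::thread, 4> threads;
  for (int id = 0; id < 4; ++id)
    threads[id] = std::thread([&, id]{ std::call_once(flag, [&]{ winner = id; }); });
  for (auto& th : threads) th.join();
  return winner;
}

int main () {
  std::array<int, 4> counts{};                 // how often each thread wins
  const int trials = 10000;
  for (int i = 0; i < trials; ++i)
    ++counts[race_once()];
  for (int id = 0; id < 4; ++id)
    std::cout << "thread " << id << ": " << counts[id]
              << " wins (uniform would be ~" << trials / 4 << ")\n";
  return 0;
}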

I'm not sure whether this is a purely hypothetical question. If you are actually thinking of doing this, you should stop and ask why you'd want to. If you need a "good enough" random source, use your library's default. If you need something cryptographically secure, use /dev/random to get the best random bytes you can, and if that doesn't give you enough bytes, feed them to a cryptographically secure hash function in a way that is justified by best practices.
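For example, in C++ the "library default" route is just the standard <random> facilities:

#include <iostream>
#include <random>

int main () {
  std::random_device rd;                           // nondeterministic seed source
  std::mt19937 gen(rd());                          // standard Mersenne Twister engine
  std::uniform_int_distribution<int> dist(0, 99);  // uniform integers in [0,99]
  for (int i = 0; i < 5; ++i)
    std::cout << dist(gen) << ' ';
  std::cout << '\n';
  return 0;
}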

Now, that said, many OSes do use thread timing information as input to entropy accumulators, which are later used to produce high-quality random bits. This works because those values are mixed with other seemingly unpredictable events (network data, capacitor discharges, screen contents, CPU temperatures, clock time, etc.) in a way that combines the entropy of each into a single high-entropy source. If you want to benefit from that information as a source of randomness, go about it in the normal way: either read /dev/random or use your OS-specific mechanism for obtaining high-entropy bytes.
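On Linux, reading high-entropy bytes the normal way can be as simple as the following sketch (it reads from /dev/urandom, the non-blocking counterpart of /dev/random; error handling is kept minimal):

#include <cstdint>
#include <fstream>
#include <iostream>

int main () {
  std::ifstream dev("/dev/urandom", std::ios::binary);
  if (!dev) {
    std::cerr << "could not open /dev/urandom\n";
    return 1;
  }
  std::uint8_t bytes[16];
  dev.read(reinterpret_cast<char*>(bytes), sizeof bytes);
  for (auto b : bytes)
    std::cout << static_cast<int>(b) << ' ';       // 16 high-entropy bytes
  std::cout << '\n';
  return 0;
}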

