# Common Thread Synchronization Techniques

One of the benefits of using multiple threads in an application is that each thread executes asynchronously. For Windows applications, this allows time-consuming tasks to be performed in the background while the application window and controls remain responsive. For server applications, multithreading makes it possible to handle each incoming request on a separate thread; without it, no new request could be serviced until the previous request had been fully satisfied.

However, the asynchronous nature of threads means that access to resources such as file handles, network connections, and memory must be coordinated. Otherwise, two or more threads could access the same resource at the same time, each unaware of the other's actions, and the result is unpredictable data corruption.

For simple operations on integral numeric types, synchronization can be accomplished with members of the System.Threading.Interlocked class. For all other data types and for resources that are not thread safe, multithreaded access can be performed safely only by using the constructs described below.
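To illustrate the Interlocked case, here is a minimal hedged sketch (the shared counter field, the thread count, and the iteration count are all illustrative, not from the original text): several threads increment a shared int with Interlocked.Increment, which is atomic, whereas a plain counter++ could lose updates.

```csharp
using System;
using System.Threading;

class InterlockedCounterSketch
{
    private static int counter = 0;   // shared value updated by several threads

    static void Main()
    {
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.Length; i++)
        {
            threads[i] = new Thread(() =>
            {
                for (int j = 0; j < 100000; j++)
                    Interlocked.Increment(ref counter);   // atomic read-modify-write
            });
            threads[i].Start();
        }
        foreach (Thread t in threads)
            t.Join();

        // Always prints 400000; with counter++ the total could come up short.
        Console.WriteLine(counter);
    }
}
```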

## The lock Keyword

The C# lock statement can be used to ensure that a block of code runs to completion without interruption by other threads. It does this by obtaining a mutual-exclusion lock on a given object for the duration of the code block. The lock statement is given an object as its argument, followed by a code block that is to be executed by only one thread at a time. For example:

```csharp
public class TestThreading  
{  
    private System.Object lockThis = new System.Object();  

    public void Process()  
    {  

        lock (lockThis)  
        {  
            // Access thread-sensitive resources.  
        }  
    }  

}
```

The argument provided to the lock keyword must be an object based on a reference type, and it is used to define the scope of the lock. In the example above, the lock scope is limited to this function because no references to lockThis exist outside the function. If such a reference did exist, the lock scope would extend to that object. Strictly speaking, the object provided is used only to uniquely identify the resource being shared among multiple threads, so it can be an arbitrary class instance. In practice, however, this object usually represents the resource for which thread synchronization is necessary. For example, if a container object is to be used by multiple threads, the container can be passed to lock, and the synchronized code block following the lock accesses the container. As long as other threads also lock the container before accessing it, access to that object is safely synchronized.

In general, it is best to avoid locking on a public type, or on object instances beyond your application's control. For example, lock(this) can be a problem if the instance can be accessed publicly, because code beyond your control might also lock on the object. This can create deadlock situations in which two or more threads wait for the release of the same object. For the same reason, locking on a public data type, as opposed to an object, can cause problems. Locking on literal strings is especially risky because literal strings are interned by the common language runtime (CLR): there is only one instance of any given string literal for the entire program, and that one object represents the literal in all threads of all running application domains. As a result, a lock placed on a string with a given content anywhere in the application process locks all instances of that string in the application. It is therefore best to lock a private or protected member that is not interned. Some classes provide members specifically for locking; the System.Array type, for example, provides System.Array.SyncRoot, and many collection types also provide a SyncRoot member.
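As a small sketch of that recommendation (the Account class and its members are illustrative, not from the original text), a dedicated private readonly lock object keeps the lock scope entirely under the class's control, unlike lock(this) or locking on a string literal:

```csharp
public class Account
{
    // Private lock object: no outside code can take this lock, so callers
    // cannot accidentally deadlock the class the way lock(this) allows.
    private readonly object balanceLock = new object();
    private decimal balance;

    public void Deposit(decimal amount)
    {
        lock (balanceLock)
        {
            balance += amount;
        }
    }

    public decimal GetBalance()
    {
        lock (balanceLock)
        {
            return balance;
        }
    }
}
```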

## Monitors

Like the lock keyword, monitors prevent blocks of code from being executed by more than one thread at a time. The System.Threading.Monitor.Enter() method allows one and only one thread to proceed into the following statements; all other threads are blocked until the executing thread calls System.Threading.Monitor.Exit(). This is just like using the lock keyword. For example:

```csharp
lock (x)  
{  
    DoSomething();  
}
```

is equivalent to:

```csharp
System.Object obj = (System.Object)x;  
System.Threading.Monitor.Enter(obj);  
try  
{  
    DoSomething();  
}  
finally  
{  
    System.Threading.Monitor.Exit(obj);  
}
```

Using the lock keyword is generally preferred over using the System.Threading.Monitor class directly, both because lock is more concise and because lock ensures that the underlying monitor is released even if the protected code throws an exception. This is accomplished with the finally keyword, which executes its associated code block regardless of whether an exception is thrown.
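Beyond this one-to-one equivalence, Monitor also offers functionality that lock does not, such as attempting to acquire the lock with a timeout. A minimal hedged sketch (the gate field and the 500 ms timeout are illustrative):

```csharp
using System;
using System.Threading;

static class MonitorTimeoutSketch
{
    private static readonly object gate = new object();

    public static void TryDoWork()
    {
        // Try to take the lock, but give up after 500 ms instead of blocking forever.
        if (Monitor.TryEnter(gate, TimeSpan.FromMilliseconds(500)))
        {
            try
            {
                // Access thread-sensitive resources here.
            }
            finally
            {
                Monitor.Exit(gate);
            }
        }
        else
        {
            Console.WriteLine("Could not acquire the lock within 500 ms.");
        }
    }
}
```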

## Synchronization Events and Wait Handles

Using a lock or a monitor is useful for preventing the simultaneous execution of thread-sensitive blocks of code, but these constructs do not allow one thread to communicate an event to another. This requires synchronization events, which are objects that have one of two states, signaled and unsignaled, and that can be used to activate and suspend threads. Threads can be suspended by being made to wait on a synchronization event that is unsignaled, and they can be activated by changing the event state to signaled. If a thread attempts to wait on an event that is already signaled, the thread continues to execute without delay.

There are two kinds of synchronization events:

- System.Threading.AutoResetEvent
- System.Threading.ManualResetEvent

The difference between them is that an AutoResetEvent changes from signaled to unsignaled automatically each time it activates a thread. Conversely, a ManualResetEvent allows any number of threads to be activated by its signaled state, and it reverts to the unsignaled state only when its System.Threading.EventWaitHandle.Reset() method is called.
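To make that difference concrete, here is a hedged sketch (the worker count and sleep interval are illustrative): a single Set() on the ManualResetEvent releases every waiting worker, whereas an AutoResetEvent in its place would release only one worker per Set().

```csharp
using System;
using System.Threading;

class ManualResetSketch
{
    // Swap this for an AutoResetEvent and only one worker wakes up per Set().
    static ManualResetEvent gate = new ManualResetEvent(false);

    static void Main()
    {
        for (int i = 1; i <= 3; i++)
        {
            int id = i;
            new Thread(() =>
            {
                gate.WaitOne();                          // block until the event is signaled
                Console.WriteLine("worker {0} released", id);
            }).Start();
        }

        Thread.Sleep(500);                               // let all workers reach WaitOne
        gate.Set();                                      // signaled: all three workers proceed
    }
}
```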

Threads can be made to wait on events by calling one of the wait methods: System.Threading.WaitHandle.WaitOne(), System.Threading.WaitHandle.WaitAny(), or System.Threading.WaitHandle.WaitAll().

- WaitOne() causes the thread to wait until a single event becomes signaled.
- WaitAny() blocks the thread until one or more of the indicated events becomes signaled.
- WaitAll() blocks the thread until all of the indicated events become signaled.

An event becomes signaled when its System.Threading.EventWaitHandle.Set() method is called.
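As a hedged sketch of WaitAll (the worker count and sleep times are illustrative), the coordinating thread blocks until every worker has signaled its own completion event:

```csharp
using System;
using System.Threading;

class WaitAllSketch
{
    static void Main()
    {
        // One completion event per worker thread.
        ManualResetEvent[] done = new ManualResetEvent[3];

        for (int i = 0; i < done.Length; i++)
        {
            done[i] = new ManualResetEvent(false);
            int index = i;
            new Thread(() =>
            {
                Thread.Sleep(200 * (index + 1));         // simulate work
                Console.WriteLine("worker {0} finished", index);
                done[index].Set();                       // signal completion
            }).Start();
        }

        WaitHandle.WaitAll(done);                        // block until every event is signaled
        Console.WriteLine("all workers finished");
    }
}
```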

In the following example, a thread is created and started by the Main function. The new thread waits on an event by using the System.Threading.WaitHandle.WaitOne() method. The thread is suspended until the event is signaled by the primary thread that is executing the Main function. Once the event is signaled, the auxiliary thread returns. In this case, because the event is used for only one thread activation, either the AutoResetEvent or ManualResetEvent class could be used.

```csharp
using System;  
using System.Threading;  

class ThreadingExample  
{  
    static AutoResetEvent autoEvent;  

    static void DoWork()  
    {  
        Console.WriteLine("   worker thread started, now waiting on event...");  
        autoEvent.WaitOne();  
        Console.WriteLine("   worker thread reactivated, now exiting...");  
    }  

    static void Main()  
    {  
        autoEvent = new AutoResetEvent(false);  

        Console.WriteLine("main thread starting worker thread...");  
        Thread t = new Thread(DoWork);  
        t.Start();  

        Console.WriteLine("main thread sleeping for 1 second...");  
        Thread.Sleep(1000);  

        Console.WriteLine("main thread signaling worker thread...");  
        autoEvent.Set();  
    }  
}
```

## Mutex Objects

A mutex is similar to a monitor; it prevents more than one thread from executing a block of code at the same time. In fact, its name comes from the term mutually exclusive. Unlike a monitor, however, a mutex can be used to synchronize threads across processes.

When used for inter-process synchronization, a mutex cannot be shared by means of a global or static variable, because it is going to be used from another application. It must instead be given a name so that both applications can access the same mutex object.
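A common use of a named mutex is to enforce a single running instance of an application. The following is a hedged sketch; the mutex name "MyAppSingleInstanceMutex" is illustrative, and the createdNew flag reports whether this process is the one that created the shared OS-level object:

```csharp
using System;
using System.Threading;

class SingleInstanceSketch
{
    static void Main()
    {
        bool createdNew;
        // The same name used in another process refers to the same OS-level mutex.
        using (Mutex instanceMutex = new Mutex(false, "MyAppSingleInstanceMutex", out createdNew))
        {
            if (!createdNew)
            {
                Console.WriteLine("Another instance is already running; exiting.");
                return;
            }

            Console.WriteLine("This is the only instance. Press ENTER to exit.");
            Console.ReadLine();
        }
    }
}
```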

Although a mutex can be used for intra-process thread synchronization, System.Threading.Monitor is generally preferred for that purpose, because monitors were designed specifically for the .NET Framework and therefore make better use of resources. In contrast, the System.Threading.Mutex class is a wrapper around a Win32 construct. While it is more powerful than a monitor, a mutex requires interop transitions that are computationally more expensive than those required by the System.Threading.Monitor class.

In the following example, each thread calls the WaitOne(Int32) method to acquire the mutex. If the time-out interval elapses, the method returns false, and the thread acquires neither the mutex nor access to the resource the mutex protects. The ReleaseMutex method can be called only by the thread that acquired the mutex.

```csharp
using System;
using System.Threading;

class Example
{
// Create a new Mutex. The creating thread does not own the mutex.
private static Mutex mut = new Mutex();
private const int numIterations = 1;
private const int numThreads = 3;

static void Main()
{
    Example ex = new Example();
    ex.StartThreads();
}

 private void StartThreads()
 {
    // Create the threads that will use the protected resource.
    for(int i = 0; i < numThreads; i++)
    {
        Thread newThread = new Thread(new ThreadStart(ThreadProc));
        newThread.Name = String.Format("Thread{0}", i + 1);
        newThread.Start();
    }

    // The main thread returns to Main and exits, but the application continues to
    // run until all foreground threads have exited.
}

private static void ThreadProc()
{
    for(int i = 0; i < numIterations; i++)
    {
        UseResource();
    }
}

// This method represents a resource that must be synchronized
// so that only one thread at a time can enter.
private static void UseResource()
{
    // Wait until it is safe to enter, and do not enter if the request times out.
    Console.WriteLine("{0} is requesting the mutex", Thread.CurrentThread.Name);
    if (mut.WaitOne(1000)) {
       Console.WriteLine("{0} has entered the protected area", 
           Thread.CurrentThread.Name);

       // Place code to access non-reentrant resources here.

       // Simulate some work.
       Thread.Sleep(5000);

       Console.WriteLine("{0} is leaving the protected area", 
           Thread.CurrentThread.Name);

       // Release the Mutex.
          mut.ReleaseMutex();
       Console.WriteLine("{0} has released the mutex", 
           Thread.CurrentThread.Name);
    }
    else {
       Console.WriteLine("{0} will not acquire the mutex", 
           Thread.CurrentThread.Name);
    }
}

~Example()
{
   mut.Dispose();
}
}
// The example displays output like the following:
//       Thread1 is requesting the mutex
//       Thread1 has entered the protected area
//       Thread2 is requesting the mutex
//       Thread3 is requesting the mutex
//       Thread2 will not acquire the mutex
//       Thread3 will not acquire the mutex
//       Thread1 is leaving the protected area
//       Thread1 has released the mutex

```

## The Interlocked Class

You can use the methods of the System.Threading.Interlocked class to prevent problems that can occur when multiple threads attempt to simultaneously update or compare the same value. The methods of this class let you safely increment, decrement, exchange, and compare values from any thread.

For example, the following code sample demonstrates a thread-safe resource-locking mechanism:

```csharp
using System;
using System.Threading;

namespace InterlockedExchange_Example
{
class MyInterlockedExchangeExampleClass
{
    //0 for false, 1 for true.
    private static int usingResource = 0;

    private const int numThreadIterations = 5;
    private const int numThreads = 10;

    static void Main()
    {
        Thread myThread;
        Random rnd = new Random();

        for(int i = 0; i < numThreads; i++)
        {
            myThread = new Thread(new ThreadStart(MyThreadProc));
            myThread.Name = String.Format("Thread{0}", i + 1);

            //Wait a random amount of time before starting next thread.
            Thread.Sleep(rnd.Next(0, 1000));
            myThread.Start();
        }
    }

    private static void MyThreadProc()
    {
        for(int i = 0; i < numThreadIterations; i++)
        {
            UseResource();

            //Wait 1 second before next attempt.
            Thread.Sleep(1000);
        }
    }

    //A simple method that denies reentrancy.
    static bool UseResource()
    {
        //0 indicates that the method is not in use.
        if(0 == Interlocked.Exchange(ref usingResource, 1))
        {
            Console.WriteLine("{0} acquired the lock", Thread.CurrentThread.Name);

            //Code to access a resource that is not thread safe would go here.

            //Simulate some work
            Thread.Sleep(500);

            Console.WriteLine("{0} exiting lock", Thread.CurrentThread.Name);

            //Release the lock
            Interlocked.Exchange(ref usingResource, 0);
            return true;
        }
        else
        {
            Console.WriteLine("   {0} was denied the lock", Thread.CurrentThread.Name);
            return false;
        }
    }

}
}
```
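Interlocked.CompareExchange supports a closely related lock-free pattern: read the current value, compute a new value, and publish it only if no other thread has changed the value in the meantime, retrying otherwise. A minimal hedged sketch (the total field and AddAtomically method are illustrative):

```csharp
using System.Threading;

static class CompareExchangeSketch
{
    private static int total = 0;

    // Atomically add 'amount' to 'total' without taking a lock.
    public static int AddAtomically(int amount)
    {
        int initial, computed;
        do
        {
            initial = total;                 // snapshot the current value
            computed = initial + amount;     // compute the desired new value
        }
        // Publish only if 'total' still equals the snapshot; otherwise retry.
        while (Interlocked.CompareExchange(ref total, computed, initial) != initial);

        return computed;
    }
}
```

For a plain sum, Interlocked.Add does this directly; the compare-exchange loop is the general pattern for updates that the built-in atomic operations do not cover.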

## ReaderWriter Locks

In some cases you may want to lock a resource only when data is being written, and to permit multiple clients to read the data concurrently when it is not being updated. The System.Threading.ReaderWriterLock class enforces exclusive access to a resource while a thread is modifying it, but it allows non-exclusive access when the resource is being read. ReaderWriter locks can be used instead of exclusive locks, which make other threads wait even when those threads do not need to update the data.

```csharp
// The complete code is located in the ReaderWriterLock class topic.
using System;
using System.Threading;

public class Example
{
   static ReaderWriterLock rwl = new ReaderWriterLock();
   // Define the shared resource protected by the ReaderWriterLock.
   static int resource = 0;

   const int numThreads = 26;
   static bool running = true;
   static Random rnd = new Random();

   // Statistics.
   static int readerTimeouts = 0;
   static int writerTimeouts = 0;
   static int reads = 0;
   static int writes = 0;

   public static void Main()
   {
  // Start a series of threads to randomly read from and
  // write to the shared resource.
  Thread[] t = new Thread[numThreads];
  for (int i = 0; i < numThreads; i++){
     t[i] = new Thread(new ThreadStart(ThreadProc));
     t[i].Name = new String(Convert.ToChar(i + 65), 1);
     t[i].Start();
     if (i > 10)
        Thread.Sleep(300);
  }

  // Tell the threads to shut down and wait until they all finish.
  running = false;
  for (int i = 0; i < numThreads; i++)
     t[i].Join();

  // Display statistics.
  Console.WriteLine("\n{0} reads, {1} writes, {2} reader time-outs, {3} writer time-outs.",
        reads, writes, readerTimeouts, writerTimeouts);
  Console.Write("Press ENTER to exit... ");
  Console.ReadLine();
   }

   static void ThreadProc()
   {
  // Randomly select a way for the thread to read and write from the shared
  // resource.
  while (running) {
     double action = rnd.NextDouble();
     if (action < .8)
        ReadFromResource(10);
     else if (action < .81)
        ReleaseRestore(50);
     else if (action < .90)
        UpgradeDowngrade(100);
     else
        WriteToResource(100);
  }
   }

   // Request and release a reader lock, and handle time-outs.
   static void ReadFromResource(int timeOut)
   {
  try {
     rwl.AcquireReaderLock(timeOut);
     try {
        // It is safe for this thread to read from the shared resource.
        Display("reads resource value " + resource);
        Interlocked.Increment(ref reads);
     }
     finally {
        // Ensure that the lock is released.
        rwl.ReleaseReaderLock();
     }
  }
  catch (ApplicationException) {
     // The reader lock request timed out.
     Interlocked.Increment(ref readerTimeouts);
  }
}

   // Request and release the writer lock, and handle time-outs.
   static void WriteToResource(int timeOut)
   {
  try {
     rwl.AcquireWriterLock(timeOut);
     try {
        // It's safe for this thread to access from the shared resource.
        resource = rnd.Next(500);
        Display("writes resource value " + resource);
        Interlocked.Increment(ref writes);
     }
     finally {
        // Ensure that the lock is released.
        rwl.ReleaseWriterLock();
     }
  }
  catch (ApplicationException) {
     // The writer lock request timed out.
     Interlocked.Increment(ref writerTimeouts);
  }
}

   // Requests a reader lock, upgrades the reader lock to the writer
   // lock, and downgrades it to a reader lock again.
   static void UpgradeDowngrade(int timeOut)
   {
  try {
     rwl.AcquireReaderLock(timeOut);
     try {
        // It's safe for this thread to read from the shared resource.
        Display("reads resource value " + resource);
        Interlocked.Increment(ref reads);

        // To write to the resource, either release the reader lock and
        // request the writer lock, or upgrade the reader lock. Upgrading
        // the reader lock puts the thread in the write queue, behind any
        // other threads that might be waiting for the writer lock.
        try {
           LockCookie lc = rwl.UpgradeToWriterLock(timeOut);
           try {
              // It's safe for this thread to read or write from the shared resource.
              resource = rnd.Next(500);
              Display("writes resource value " + resource);
              Interlocked.Increment(ref writes);
           }
           finally {
              // Ensure that the lock is released.
              rwl.DowngradeFromWriterLock(ref lc);
           }
        }
        catch (ApplicationException) {
           // The upgrade request timed out.
           Interlocked.Increment(ref writerTimeouts);
        }

        // If the lock was downgraded, it's still safe to read from the resource.
        Display("reads resource value " + resource);
        Interlocked.Increment(ref reads);
     }
     finally {
        // Ensure that the lock is released.
        rwl.ReleaseReaderLock();
     }
  }
  catch (ApplicationException) {
     // The reader lock request timed out.
     Interlocked.Increment(ref readerTimeouts);
  }
   }

   // Release all locks and later restores the lock state.
   // Uses sequence numbers to determine whether another thread has
   // obtained a writer lock since this thread last accessed the resource.
   static void ReleaseRestore(int timeOut)
   {
  int lastWriter;

  try {
     rwl.AcquireReaderLock(timeOut);
     try {
        // It's safe for this thread to read from the shared resource,
        // so read and cache the resource value.
        int resourceValue = resource;     // Cache the resource value.
        Display("reads resource value " + resourceValue);
        Interlocked.Increment(ref reads);

        // Save the current writer sequence number.
        lastWriter = rwl.WriterSeqNum;

        // Release the lock and save a cookie so the lock can be restored later.
        LockCookie lc = rwl.ReleaseLock();

        // Wait for a random interval and then restore the previous state of the lock.
        Thread.Sleep(rnd.Next(250));
        rwl.RestoreLock(ref lc);

        // Check whether other threads obtained the writer lock in the interval.
        // If not, then the cached value of the resource is still valid.
        if (rwl.AnyWritersSince(lastWriter)) {
           resourceValue = resource;
           Interlocked.Increment(ref reads);
           Display("resource has changed " + resourceValue);
        }
        else {
           Display("resource has not changed " + resourceValue);
        }
     }
     finally {
        // Ensure that the lock is released.
        rwl.ReleaseReaderLock();
     }
  }
  catch (ApplicationException) {
     // The reader lock request timed out.
     Interlocked.Increment(ref readerTimeouts);
  }
   }

   // Helper method briefly displays the most recent thread action.
   static void Display(string msg)
   {
  Console.Write("Thread {0} {1}.       \r", Thread.CurrentThread.Name, msg);
   }
}
```
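Note that newer .NET code generally uses System.Threading.ReaderWriterLockSlim, which has simpler semantics and better performance than ReaderWriterLock. A minimal hedged sketch of the same read-mostly idea with the slim lock (the cache dictionary and its key/value types are illustrative):

```csharp
using System.Collections.Generic;
using System.Threading;

class SlimCacheSketch
{
    private readonly ReaderWriterLockSlim cacheLock = new ReaderWriterLockSlim();
    private readonly Dictionary<int, string> cache = new Dictionary<int, string>();

    public string Read(int key)
    {
        cacheLock.EnterReadLock();           // many readers may hold the lock at once
        try
        {
            string value;
            return cache.TryGetValue(key, out value) ? value : null;
        }
        finally
        {
            cacheLock.ExitReadLock();
        }
    }

    public void Write(int key, string value)
    {
        cacheLock.EnterWriteLock();          // exclusive: blocks both readers and writers
        try
        {
            cache[key] = value;
        }
        finally
        {
            cacheLock.ExitWriteLock();
        }
    }
}
```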

## Deadlock

Thread synchronization is invaluable in multithreaded applications, but there is always the danger of creating a deadlock, in which multiple threads wait for each other and the application comes to a halt. A deadlock is analogous to cars stopped at a four-way intersection, with each driver waiting for the others to go first. Avoiding deadlocks is important; the key is careful planning. You can often predict deadlock situations by diagramming multithreaded applications before you start coding.
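As a hedged illustration of the classic lock-ordering deadlock (lockA, lockB, and the sleeps are illustrative, added only to make the bad interleaving likely), each thread holds one lock while waiting forever for the other; acquiring the locks in the same order on every thread removes the problem:

```csharp
using System;
using System.Threading;

class DeadlockSketch
{
    static readonly object lockA = new object();
    static readonly object lockB = new object();

    static void Main()
    {
        Thread t1 = new Thread(() =>
        {
            lock (lockA)
            {
                Thread.Sleep(100);           // give the other thread time to take lockB
                lock (lockB) { Console.WriteLine("t1 acquired both locks"); }
            }
        });

        Thread t2 = new Thread(() =>
        {
            lock (lockB)                     // opposite order: the source of the deadlock
            {
                Thread.Sleep(100);
                lock (lockA) { Console.WriteLine("t2 acquired both locks"); }
            }
        });

        t1.Start();
        t2.Start();
        t1.Join();                           // with the interleaving above, this never returns
        t2.Join();
    }
}
```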
