

Basics of Mineral Resource Modeling Using Open Source Tools.


Part 1 - Introduction


Full Open Source

We will learn the basics of how to estimate mineral resources using block models, without spending a penny on software.
Some Python and R programming is necessary, and knowing PostgreSQL is a plus.
You will also need Paraview to visualize your model and CloudCompare to create domains.

Download Python3 here
Download R here
Download Paraview here
Download CloudCompare here
Download PostgreSQL here

These are the files needed.

  • Collar
  • Survey
  • Assay


  • Download the PDF of this tutorial.

    We are good to go now.


    Part 2 - Preparing the data


    The first step is loading all the exploration data files into a single database.
    We will create a database called 'jericho' and load the collar, assay and survey data into it.
    createdb jericho  --encoding=utf-8
    psql jericho -c "CREATE EXTENSION postgis;"
    psql jericho -c "CREATE TABLE collar(bhid varchar(30) NOT NULL, xcollar numeric NOT NULL,ycollar numeric NOT NULL,zcollar numeric NOT NULL, UNIQUE(bhid));"
    psql jericho -c "CREATE TABLE assay(bhid varchar(30) NOT NULL, _from numeric NOT NULL, _to numeric NOT NULL, cu numeric, au numeric, dmn varchar(10), UNIQUE(bhid, _from, _to));"
    psql jericho -c "CREATE TABLE survey(bhid varchar(30) NOT NULL,_at numeric NOT NULL,az numeric NOT NULL,dip numeric NOT NULL, UNIQUE(bhid, _at));"
    

    Now, using R, we will populate the tables. Replace 'yourUserName' and 'secret' with your database username and password.
    $ R
    collar<-read.csv('collar_JERICHO.csv')
    survey<-read.csv('survey_JERICHO.csv')
    assay<-read.csv('assay_JERICHO.csv')
    library(RPostgreSQL)
    con<-dbConnect(PostgreSQL(),host='127.0.0.1',user='yourUserName', password='secret',dbname='jericho')
    names(collar)<-c('bhid','xcollar','ycollar','zcollar')
    dbWriteTable(con, "collar", collar, row.names=FALSE, append=TRUE)
    names(assay)<-c('bhid','_from','_to','cu','au','dmn')
    dbWriteTable(con, "assay", assay, row.names=FALSE, append=TRUE)
    names(survey)<-c('bhid','_at','az','dip')
    dbWriteTable(con, "survey", survey, row.names=FALSE, append=TRUE)
    



    Part 3 - Loading the data using pygslib


    Pygslib is based on GSLIB, a Fortran 77 library. GSLIB is an acronym for Geostatistical Software LIBrary, a name originally used for a collection of geostatistical programs developed at Stanford University. For more details on GSLIB download this book.
    We will start by loading the information from the database 'jericho' into pygslib. Refer to this site to see how to install pygslib. You will also need pandas, psycopg2, numpy and matplotlib in your Python installation.
    We will now read the drilling data and the sample assay data from the database, create a drillhole object and save it for visualization.
    import pandas as pd
    import matplotlib.pylab as plt
    import numpy as np
    import psycopg2 as pg
    import pygslib
    
    #------------------loading data from DB-----------------------------------------------
    connection = pg.connect("dbname=jericho user=yourUserName")
    collar = pd.read_sql('SELECT bhid as "BHID", xcollar as "XCOLLAR", ycollar as "YCOLLAR", zcollar as "ZCOLLAR" FROM collar', connection)
    survey = pd.read_sql('SELECT bhid as "BHID", _at as "AT", az as "AZ", dip as "DIP" FROM survey', connection)
    assay = pd.read_sql('SELECT bhid as "BHID", _from as "FROM", _to as "TO", _to-_from  as "Length", cu as "CU" FROM assay', connection)
    
    #----------------- Downhole data Object with survey and assay results-----------------
    dhdata=pygslib.drillhole.Drillhole(collar=collar, survey=survey)
    dhdata.addtable(assay, 'assay', overwrite = False)
    
    #--------------------calculating ideal composite interval (cint) ---------------------
    lm = dhdata.table['assay']['Length'].mode()[0]
    print ('Length Mode = ', lm)
    
    #--- composite creation
    dhdata.downh_composite('assay', variable_name= "CU", new_table_name= "CMP", cint = lm, minlen=-1, overwrite = True)
    
    #--- desurveying each sample interval in 3D space
    dhdata.desurvey('CMP',warns=False, endpoints=True)
    
    #--- replacing string type id with a unique integer type id
    dhdata.txt2intID('CMP')
    
    #--- Saving drillhole object for visualization with Paraview
    dhdata.intervals2vtk('CMP', 'holes')
    #...
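    
    Before moving on to Paraview it is worth taking a quick, optional look at the composited intervals; dhdata.table['CMP'] is a pandas DataFrame, so the usual pandas methods apply (a minimal sketch):
    #--- optional: inspect the composited intervals
    print(dhdata.table['CMP'].head())
    print(dhdata.table['CMP']['CU'].describe())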
    

    We can now open Paraview and load the file 'holes.vtp' that we just created.




    Part 4 - About Domains


    In the next part we will need a domain to bound our data. We'll use a single domain (volume) that envelops the entire dataset, expressed as a stereolithography (.stl) file. In geology that is not common, since we quite often have more than one domain, expressed as different lithologies or as groupings of units with similar characteristics. A domain can also reflect the spatial orientation of a mineralized zone.
    We will now see how to create domain files from a point cloud using CloudCompare. Other software, such as Blender, could be used, but CloudCompare is the simplest for creating model domains.
    First, what is a point cloud? A point cloud is a collection of x, y and z data that define the spatial positions of points (domain boundary points in our case). The point cloud will define a surface that will be used to delimit our domain. Now we will see how to create the domain used above.

    Creating the point cloud
    In R we will extract the coordinates of the top of the first assayed interval and of the bottom of the last assayed interval in each drillhole to create two xyz files (t1.xyz and b2.xyz) that will be used to generate the domain. Inside the code we create two functions that calculate the exact position of these points from the azimuth, dip and depth down the hole.
    # R   --------Domain creation------------------------------------------------------------------------------------------
    library(RPostgreSQL)
    
    #--- connecting with the database
    con<-dbConnect(PostgreSQL(),host='127.0.0.1',user='yourUserName', password='secret',dbname='jericho')
    
    #--- getting collar coordinates, azimuth and dip
    colar<-dbGetQuery(con, "select c.bhid as furo,s._at as prof, c.xcollar as x, c.ycollar as y, c.zcollar as z, s.dip as mergulho, s.az as az from collar as c, survey as s where c.bhid = s.bhid and s._at=0")
    colar$mergulho<-colar$mergulho*-1
    # creating a dataframe with borehole ID, dip, azimuth and empty coordinates
    estacoes<-dbGetQuery(con, "SELECT bhid as furo, _at as prof,null AS x, null AS y, null as z, dip as mergulho, az FROM survey")
    estacoes$mergulho<-estacoes$mergulho*-1
    
    #--- 3D positioning function
    tresD<-function(col,su){
     linha<-data.frame()
     for(i in 1:nrow(col)){
      x<-col[i,3]
      y<-col[i,4]
      z<-col[i,5]
      di<-0
      fur<-su[su[[1]]==col$furo[[i]],]
      fur<-fur[order(fur$prof),]
      for(j in 1:nrow(fur)){
       angulo<-fur[j,7]
       d<-fur[j,2]-di
       di<-d+di
       deltaz<-abs(d*sin(fur[j,6]*pi/180))
       raio<-d*cos(fur[j,6]*pi/180)
       x<-round(x+raio*sin(angulo*pi/180))
       y<-round(y+raio*cos(angulo*pi/180))
       z<-round(z-deltaz)
       linha<-rbind(linha,data.frame(furo=fur[j,1] ,prof=fur[j,2],x=x ,y=y ,z=z,mergulho=fur[j,6],az=fur[j,7]))
       z<-z
      }
     }
     lin<-rbind(linha,col)
     lin<-lin[order(lin$furo,lin$prof),]
     return(lin)
    }
    pts.XYZ<-tresD(colar,estacoes)
    
    #--- getting coordinates and elevation based on depth inside the drillhole
    obterXYZ<-function(pontos,furo,profu){
     sete<-pontos[as.character(pontos$furo)==furo,]
     refa<-sete[sete$prof==max(sete[sete$prof<=profu,]$prof),]
     ref<-refa[1,]
     d<-profu-ref$prof
     deltaz<-abs(d*sin(ref$mergulho*pi/180))
     raio<-d*cos(ref$mergulho*pi/180)
     z<-ref$z-deltaz
     angulo<-ref$az
     x<-ref$x+raio*sin(angulo*pi/180)
     y<-ref$y+raio*cos(angulo*pi/180)
     return(c(x,y,z))
    }
    
    #--- getting the top of the first interval assayed
    topoj1<-dbGetQuery(con, "select a.bhid as furo,min(a._from) as prof from assay as a, collar as c where c.bhid=a.bhid and a.dmn='J1' group by a.bhid order by a.bhid")
    topoj1$prof[topoj1$prof == 0] <- 0.01
    
    #--- getting the base of the last interval assayed
    basej2<-dbGetQuery(con, "select a.bhid as furo,max(a._to) as prof from assay as a, collar as c where c.bhid=a.bhid and a.dmn='J2' group by a.bhid order by a.bhid")
    basej2$prof[basej2$prof == 0] <- 0.01
    
    #--- calculating the coordinates and elevation for top
    dftopoj1<-data.frame(x=numeric(),y=numeric(),z=numeric(),i=numeric(),j=numeric(),k=numeric()) 
    for(i in 1 : dim(topoj1)[1]){
     dado<-obterXYZ(pts.XYZ,topoj1$furo[i],topoj1$prof[i])
     dftopoj1<-rbind(dftopoj1,c(dado[1],dado[2],dado[3]))
    }
    names(dftopoj1)<-c('x','y','z')
    write.table(dftopoj1,'t1.xyz', row.names = F,col.names=F, sep=' ')
    
    #--- calculating the coordinates and elevation for base
    dfbasej2<-data.frame(x=numeric(),y=numeric(),z=numeric(),i=numeric(),j=numeric(),k=numeric()) 
    for(i in 1 : dim(basej2)[1]){
     dado<-obterXYZ(pts.XYZ,basej2$furo[i],basej2$prof[i])
     dfbasej2<-rbind(dfbasej2,c(dado[1],dado[2],dado[3]))
    }
    names(dfbasej2)<-c('x','y','z')
    write.table(dfbasej2,'b2.xyz', row.names = F,col.names=F, sep=' ')
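    
    Before switching to CloudCompare it can help to eyeball the two point clouds. The optional Python sketch below (assuming numpy and matplotlib are available, as in Part 3) reads the space-separated t1.xyz and b2.xyz files written above and plots them in 3D:
    import numpy as np
    import matplotlib.pylab as plt
    from mpl_toolkits.mplot3d import Axes3D  # enables the '3d' projection
    
    # optional visual check of the two point clouds written by the R script
    top = np.loadtxt('t1.xyz')
    base = np.loadtxt('b2.xyz')
    fig = plt.figure()
    ax = fig.add_subplot(111, projection='3d')
    ax.scatter(top[:, 0], top[:, 1], top[:, 2], c='blue', label='t1 (top)')
    ax.scatter(base[:, 0], base[:, 1], base[:, 2], c='red', label='b2 (base)')
    ax.legend()
    plt.show()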
    

    Start CloudCompare and open the two xyz files. Each time you open a file, click 'Apply' first to accept the data values, and then click 'No' in the second window that asks whether to transform the data. The result should look similar to the image below.


    Now we will use a trick before merging the two point clouds. We will calculate the normals for b2.xyz by selecting it in the panel on the left and, in the 'Edit' menu, choosing Normals -> Compute. The following should appear.


    Select 'Use preferred orientation', select '-Z' and click 'OK'.


    Repeat the procedure above selecting t1.xyz, but this time select '+Z' under 'Use preferred orientation'. We are now ready to merge the two.
    Select b2.xyz and t1.xyz (using the Ctrl key) and on the 'Edit' menu select 'Merge'. Answer 'No' to the message; the clouds are now merged under the name 'b2 - Cloud' or 't1 - Cloud', depending on which one you selected last.
    Create a mesh by selecting the merged object and, on the 'Plugins' menu, selecting 'PoissonRecon'. The following will appear.


    Select the tab 'Advanced' and fill the entries as shown below.


    The following screen will appear.


    The Poisson Reconstruction tends to make the mesh a bit smaller, inside the point cloud. We can fix this by selecting Menu 'Edit->Multiply/Scale' and entering the fields as shown below.


    Now save the mesh as domain.stl using Menu 'File->Save'.


    Select the option 'ASCII' and Voilà! The domain is done.


    Part 5 - Block Model


    We will create a block model bound by the extent of the domain.stl file that we created. This is a simple example using one general domain that envelops the full drillhole data. Real models use more than one domain most of the time.
    #...
    #---------------------------------- Loading Domain ---------------------------------------------------
    domain=pygslib.vtktools.loadSTL('domain.stl')
    insideDom=pygslib.vtktools.pointinsolid(domain,x=dhdata.table['CMP']['xm'].values,y=dhdata.table['CMP']['ym'].values,z=dhdata.table['CMP']['zm'].values)
    dhdata.table['CMP']['Domain']=insideDom.astype(int)
    
    #---create downhole file for visualization in paraview
    dhdata.intervals2vtk('CMP', 'holes2')
    
    #----------Creating the initial block model----------------------------------
    #---dimensions----------
    xorg = 498449
    yorg = 7677948
    zorg = -190
    dx = 5
    dy = 5
    dz = 5
    nx = 100
    ny = 449
    nz = 80
    
    #---initial block---------------
    bmodel=pygslib.blockmodel.Blockmodel(nx,ny,nz,xorg,yorg,zorg,dx,dy,dz)
    
    #---trimming it with our domain-----
    bmodel.fillwireframe(domain)
    bmodel.bmtable.rename(columns={'__in': 'D1'},inplace=True)
    
    #--- keeping/ranking blocks that are more than 50% within the domain
    bmodel.set_blocks(bmodel.bmtable[bmodel.bmtable['D1']> 0.5])
    
    #---creating block file for visualization in paraview
    bmodel.blocks2vtkUnstructuredGrid(path='block.vtu')
    #...
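    
    Before opening the block file in Paraview, a quick optional sanity check on the trimmed model can be useful (bmtable is a pandas DataFrame, so standard pandas calls apply):
    #--- optional sanity check on the trimmed block model
    print(len(bmodel.bmtable), 'blocks kept after trimming to the domain')
    print('D1 (proportion inside the domain) ranges from',
          bmodel.bmtable['D1'].min(), 'to', bmodel.bmtable['D1'].max())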
    

    Using Paraview to load the block model that we created above




    Part 6 - Declustering the data


    Deutsch has an excellent lesson* that explains how declustering works.
    *Deutsch, C. V. (2015). Cell Declustering Parameter Selection. In J. L. Deutsch (Ed.), Geostatistics Lessons. Retrieved from http://www.geostatisticslessons.com/lessons/celldeclustering
    
    We will now cover the principles of how to decluster our data.
    First we have to find the ideal declustering cell size. This is done by defining the cell anisotropy relative to X (taken as 1), i.e. how the data are distributed in relation to the X range. In our case the Y range is roughly six times the X range and the Z range is 100% (1) of the X range. We also have to define ncell (the number of cell sizes to test) and the cmin and cmax values; usually hundreds of cells are used, and cmax should be a value beyond which the curve tends to become flat. Use a value such as 300 and check the value on the X axis where the curve becomes flat.
    #...
    #--Finding ideal declustering cell size----
    declusP = {
     'x'      :  dhdata.table["CMP"].loc[dhdata.table['CMP']['Domain']==1, 'xm'],
     'y'      :  dhdata.table["CMP"].loc[dhdata.table['CMP']['Domain']==1, 'ym'],
     'z'      :  dhdata.table["CMP"].loc[dhdata.table['CMP']['Domain']==1, 'zm'],
     'vr'     :  dhdata.table["CMP"].loc[dhdata.table['CMP']['Domain']==1, 'CU'],
     'anisy'  :  6.1,  # Y cell anisotropy (Ysize=size*Yanis),
     'anisz'  :  1.0,  # Z cell anisotropy (Zsize=size*Zanis)
     'minmax' :  0,
     'ncell'  :  190,
     'cmin'   :  10,
     'cmax'   :  200.,
     'noff'   :  100,
     'maxcel' :  -1}
    wtopt,vrop,wtmin,wtmax,error,xinc,yinc,zinc,rxcs,rycs,rzcs,rvrcr = pygslib.gslib.declus(declusP)
    ##Plotting the optimal declustering results (check for the first low point of the curve)
    plt.plot (rxcs, rvrcr, '-o')
    plt.xlabel('Cell size in X')
    plt.ylabel('Declustered Mean')
    plt.show()
    plt.plot (rycs, rvrcr, '-o')
    plt.xlabel('Cell size in Y')
    plt.ylabel('Declustered Mean')
    plt.show()
    plt.plot (rzcs, rvrcr, '-o')
    plt.xlabel('Cell size in Z')
    plt.ylabel('Declustered Mean')
    plt.show()
    #...
    




    In our case, the value of 55 found for the cell size in X will be used to calculate the declustering weights of our data, as shown below. Compare the declustered mean with the data mean.
    #...
    declusP = {
     'x'      :  dhdata.table["CMP"].loc[dhdata.table['CMP']['Domain']==1, 'xm'],
     'y'      :  dhdata.table["CMP"].loc[dhdata.table['CMP']['Domain']==1, 'ym'],
     'z'      :  dhdata.table["CMP"].loc[dhdata.table['CMP']['Domain']==1, 'zm'],
     'vr'     :  dhdata.table["CMP"].loc[dhdata.table['CMP']['Domain']==1, 'CU'],
     'anisy'  :  6.1,    
     'anisz'  :  1.0,  
     'minmax' :  0,
     'ncell'  :  1,
     'cmin'   :  55, #
     'cmax'   :  55., #
     'noff'   :  100,
     'maxcel' :  -1}
    
    # ---declustering-------------
    wtopt,vrop,wtmin,wtmax,error,xinc,yinc,zinc,rxcs,rycs,rzcs,rvrcr = pygslib.gslib.declus(declusP)
    
    # adding a declustering weight into our downhole interval table
    dhdata.table["CMP"]['declWT'] = 1
    dhdata.table["CMP"].loc[dhdata.table['CMP']['Domain']==1, 'declWT'] = wtopt
    
    # Showing  the Copper declustered mean
    print('Declustered Mean: ',rvrcr[0])
    Declustered Mean:  0.8575031798602849
    
    # Showing the copper mean from data
    print('Data Mean: ',dhdata.table['CMP']['CU'].mean())
    Data Mean:  0.8383729811778992
    #...
    


    Part 7 - Data Statistics


    We will check some data statistics by plotting the histogram of the value distribution and extracting some useful statistical parameters.
    #...
    parameters = {
     'hmin' : None,
     'hmax' : None,
     'ncl'  : 50,  
     'iwt'  : 1,   
     'ilog' : 1,   
     'icum' : 0,   
     'va'   : dhdata.table["CMP"].loc[dhdata.table['CMP']['Domain']==1, 'CU'],
     'wt'   : dhdata.table["CMP"].loc[dhdata.table['CMP']['Domain']==1, 'declWT'],
     'figure' : None ,                       
     'title'  : 'Histogram CU grade',                 
     'xlabel' : 'CU (%)', 
     'ylabel' : 'frequency(%)', 
     'color' : 'green', 
     'legend': 'Acc freq', 
     'alpha' :  0.5, 
     'lwidth': 1, 
     'legendloc': 'top_left'}  
    
    #--- calculating and plotting the histogram 
    stats, fig = pygslib.plothtml.histgplt(parameters)
    print(stats)
    {'binval': array([0.00167435, 0. , ..., 0.00026139]),
     'nincls': array([ 5.,  0., ...,  1.]),
     'cl': array([1.20226443e-03, ..., 1.00000000e+01]),
     'clwidth': array([3.00120226e+00, ... 1.68236229e+00]),
     'xpt025': 0.02,
     'xlqt': 0.17,
     'xmed': 0.38,
     'xuqt': 1.01,
     'xpt975': 4.789946803505806,
     'xmin': 0.0,
     'xmax': 11.009999999999387,
     'xcvr': 1.4653973660866735,
     'xmen': 0.8575031798602861,
     'xvar': 1.579000587533448,
     'xfrmx': 0.06566884520927543,
     'dcl': 0.08}
    
    #--- plotting the histogram
    pygslib.plothtml.show(fig)
    #...
    

    Now we will plot the distribution of the accumulated frequencies for Cu.
    #...
    parameters_probplt = {
     'iwt'  : 1,
     'va'   : dhdata.table["CMP"].loc[(dhdata.table['CMP']['Domain']==1) & (dhdata.table['CMP']['CU']>0), 'CU'],
     'wt'   : dhdata.table["CMP"].loc[(dhdata.table['CMP']['Domain']==1) & (dhdata.table['CMP']['CU']>0), 'declWT'],                         
     'figure' : None, 
     'title'  : 'Probability Graphic',
     'xlabel' : 'Cu grade (%)', 
     'ylabel' : 'P[Cu<c]', 
     'xlog' : 1, 
     'ylog' : 1,             
     'style' : 'cross',
     'color' : 'red', 
     'legend': 'Declustered Cu', 
     'alpha' : 1, 
     'lwidth': 1, 
     'legendloc': 'bottom_right'} 
    
    results, fig2 = pygslib.plothtml.probplt(parameters_probplt)
    
    #--- show the  frequency plot
    pygslib.plothtml.show(fig2)
    #...
    

    So far so good. It looks like we have Gaussian behavior with a smooth accumulated frequency distribution. Now we will move on to the most important topic.


    Part 8 - Semi-Variogram


    “Everything is related to everything else, but near things are more related than distant things...” (Tobler's first law of geography).
    This is how a semi-variogram works: closer things are more predictable and have less variability, while distant things are less predictable and therefore less related.
    A semi-variogram relates the distance between pairs of sample points to the variance of their values. Pairs of samples that are close together tend to have a small difference in value (small semi-variance); when the points are farther apart they are less likely to be similar, and the semi-variance is larger.
    As the distance keeps increasing, the relationship between the sample points no longer exists, the semi-variance flattens out, and the sample values are no longer related to each other.
    The semi-variogram generated from our data will give us the following parameters (a small numeric sketch of the semi-variance calculation follows the list):
  • Sill - The value at which the model first flattens out.
  • Range - The distance at which the model first flattens out.
  • Nugget - The value at which the semi-variogram intercepts the y-axis (the apparent variance at near-zero distance).
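
    To make the semi-variance calculation concrete, here is a small, purely illustrative Python sketch on synthetic 1D data (the positions and values below are hypothetical and are not part of the Jericho dataset). It applies gamma(h) = 1/(2*N(h)) * sum (z_i - z_j)^2 to the pairs of points separated by roughly the lag distance h:
    import numpy as np
    
    # synthetic 1D example (hypothetical positions and values)
    x = np.array([0., 10., 20., 30., 40., 50.])
    z = np.array([0.8, 0.9, 1.4, 1.2, 0.3, 0.5])
    
    def semivariance(x, z, lag, tol):
        """gamma(h) = 1/(2*N(h)) * sum of squared value differences over pairs ~lag apart"""
        dist = np.abs(x[:, None] - x[None, :])           # pairwise distances
        dz2 = (z[:, None] - z[None, :]) ** 2             # squared value differences
        pairs = np.triu(np.abs(dist - lag) <= tol, k=1)  # pairs near this lag, counted once
        return 0.5 * dz2[pairs].mean() if pairs.any() else float('nan')
    
    for h in (10, 20, 30, 40, 50):
        print('lag', h, 'semi-variance', round(semivariance(x, z, h, tol=5.), 4))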

    Let's calculate our semi-variogram using R.
    library(RPostgreSQL)
    library(gstat) 
    library(sp)
    con<-dbConnect(PostgreSQL(),host='127.0.0.1',user='yourUserName', password='secret',dbname='jericho')
    dados<-dbGetQuery(con, "select c.bhid as furo, c.xcollar as X, c.ycollar as Y, c.zcollar as Z, a.cu as CU,a.au as AU, a._from as prof from collar as c, assay as a where c.bhid = a.bhid and a.cu>0 order by furo,prof")
    colar<-dbGetQuery(con, "select c.bhid as furo,s._at as prof, c.xcollar as x, c.ycollar as y, c.zcollar as z, s.dip as mergulho, s.az as az from collar as c, survey as s where c.bhid = s.bhid and s._at=0")
    colar$mergulho<-colar$mergulho*-1
    estacoes<-dbGetQuery(con, "SELECT bhid as furo, _at as prof,null AS x, null AS y, null as z, dip as mergulho, az FROM survey")
    estacoes$mergulho<-estacoes$mergulho*-1
    tresD<-function(da,es){
     linha<-data.frame()
     for(i in 1:nrow(da)){
      x<-da[i,3]
      y<-da[i,4]
      z<-da[i,5]
      di<-0
      fur<-es[es[[1]]==da$furo[[i]],]
      fur<-fur[order(fur$prof),]
      for(j in 1:nrow(fur)){
       angulo<-fur[j,7]
       d<-fur[j,2]-di
       di<-d+di
       deltaz<-abs(d*sin(fur[j,6]*pi/180))
       raio<-d*cos(fur[j,6]*pi/180)
       x<-round(x+raio*sin(angulo*pi/180))
       y<-round(y+raio*cos(angulo*pi/180))
       z<-round(z-deltaz)
       linha<-rbind(linha,data.frame(furo=fur[j,1] ,prof=fur[j,2],x=x ,y=y ,z=z,mergulho=fur[j,6],az=fur[j,7]))
       z<-z
      }
     }
     lin<-rbind(linha,da)
     lin<-lin[order(lin$furo,lin$prof),]
     return(lin)
    }
    pts.XYZ<-tresD(colar,estacoes)
    obterXYZ<-function(pontos,furo,profu){
     sete<-pontos[as.character(pontos$furo)==furo,]
     refa<-sete[sete$prof==max(sete[sete$prof<=profu,]$prof),]
     ref<-refa[1,]
     d<-profu-ref$prof
     deltaz<-abs(d*sin(ref$mergulho*pi/180))
     raio<-d*cos(ref$mergulho*pi/180)
     z<-ref$z-deltaz
     angulo<-ref$az
     x<-ref$x+raio*sin(angulo*pi/180)
     y<-ref$y+raio*cos(angulo*pi/180)
     return(c(x,y,z))
    }
    for(i in 1 : dim(dados)[1]){
     dado<-obterXYZ(pts.XYZ,dados$furo[i],dados$prof[i])
     dados$x[i]<-dado[1]
     dados$y[i]<-dado[2]
     dados$z[i]<-dado[3]
    }
    coordinates(dados)=~x+y+z
    v<-variogram(cu~1, dados,cressie=T, cutoff=250, width=10, alpha=5, beta=0,tol.hor=90,tol.ver=90 )
    vario<-fit.variogram(v, vgm(c("Sph","Gau","Exp")))
    vario
    
      model     psill    range
    1   Nug 0.4817749  0.00000
    2   Sph 0.2141417 48.82616
    plot(v,vario)
    
    



    Part 9 - Populating the Model


    Once we have computed the nugget, sill and range values we can dimension our 3D kriging variogram based on them; we also need to find the angles we are going to use.
    The angles and anisotropy are strongly related to the structural geology that controls the mineralization. A set of range, sill, nugget, angles and anisotropy is called a 'structure', and we can use more than one structure when we can identify more than one 'structure' conditioning the mineralization. To keep it simple, we are going to use a single structure in this example.
    We will use an anisotropy in Y of 6.1 times the range (292) and in Z we use the range itself (48). We also rotate the Y axis 5 degrees to the east.
    #...
    #-----------------------------Block Model Parameters----------------------------------------
    BlocP = {
     'x' : dhdata.table["CMP"]['xm'][dhdata.table["CMP"]['Domain']==1].values,
     'y' : dhdata.table["CMP"]['ym'][dhdata.table["CMP"]['Domain']==1].values,
     'z' : dhdata.table["CMP"]['zm'][dhdata.table["CMP"]['Domain']==1].values,
     'vr' : dhdata.table["CMP"]['CU'][dhdata.table["CMP"]['Domain']==1].values,
     'bhidint' : dhdata.table["CMP"]['BHIDint'][dhdata.table["CMP"]['Domain']==1].values, # BHID integer
     'nx' : nx,  
     'ny' : ny,  
     'nz' : nz,
     'xmn' : xorg,  
     'ymn' : yorg,  
     'zmn' : zorg,  
     'xsiz' : dx,  
     'ysiz' : dy,   
     'zsiz' : dz,
     'nxdis' : 5,  
     'nydis' : 5,  
     'nzdis' : 5,  
     'outx' : bmodel.bmtable['XC'].values,  
     'outy' : bmodel.bmtable['YC'].values,
     'outz' : bmodel.bmtable['ZC'].values,
     'radius'     : 292,   # search radii Y North 
     'radius1'    : 48,    # search radii X east  
     'radius2'    : 48,    # search radii Z elevation 
     'sang1'      : 5,     # rotates the principal direction  in the horizontal clockwise in degrees
     'sang2'      : 0,     # rotates the principal direction  from the horizontal measured in negative degrees down from horizontal  
     'sang3'      : 0,     # rotates the two directions orthogonal to the principal direction (defined by ang1 and ang2) clockwise relative to it when looking toward the origin
     'ndmax'      : 40,    # maximum number of data points
     'ndmin'      : 4,     # minimum number of data points
     'noct'       : 0,     # octant search; number retained per octant, 0 = not used
     'nbhid'      : 3,     # maximum number of samples per borehole
     'ktype'      : 1,     # 0=SK, 1=OK, 2=non-st SK, 3=exdrift
     'idbg'       : 0,     # 0 = no debug
     # VARIOGRAM DATA
     'c0'         : 0.48*1.67,   # nugget; a non-normalized variance is required, so the normalized nugget (0.48) is multiplied by the variance
     'it'         : 1,           # spherical model
     'cc'         : 1.,          # contribution of structure (0 to 1)
     'aa'         : 48,          # max range
     'aa1'        : 48,          # min range
     'aa2'        : 48,          # max vertical range
     'ang1'       : 5,           # see sang1
     'ang2'       : 0,           # see sang2 
     'ang3'       : 0}           # see sang3
    
    estimate, debug, summary = pygslib.gslib.kt3d(BlocP)
    
    # adding block model estimate values to our blocks
    bmodel.bmtable['CU_OK'] = estimate['outest']
    bmodel.bmtable['CU_ID2'] = estimate['outidpower']
    bmodel.bmtable['CU_NN'] = estimate['outnn']
    bmodel.bmtable['CU_Lagrange'] = estimate['outlagrange']
    bmodel.bmtable['CU_KVar']= estimate['outkvar']
    bmodel.blocks2vtkUnstructuredGrid(path='blockModel.vtu')  
    #...
    
    Opening the modeled block with Paraview.

    The parameters selected above are very simple and are intended to illustrate how pygslib creates a resource block model.
    Note the ellipsoid size and orientation based on the ranges chosen. Areas in yellow are outside the search and are not interpolated.
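
    If you want to quantify how many blocks fell outside the search neighbourhood, a quick optional check is sketched below. It assumes that unestimated blocks come back as NaN in the 'CU_OK' column; if pygslib uses a sentinel value instead, filter on that value.
    #--- optional: count blocks with no OK estimate (outside the search)
    n_empty = bmodel.bmtable['CU_OK'].isna().sum()
    print(n_empty, 'of', len(bmodel.bmtable), 'blocks were not estimated')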


    Part 10 - More Statistics


    We will now check some statistics of our Block Model created above, first comparing the means.
    #...
    print('Ordinary Kriging mean: ',bmodel.bmtable['CU_OK'].mean())
    Ordinary Kriging mean:  0.998219
    print('Nearest Neighbour mean:',bmodel.bmtable['CU_NN'].mean())
    Nearest Neighbour mean: 0.88160115
    print('Inverse of the distance mean:',bmodel.bmtable['CU_ID2'].mean())
    Inverse of the distance mean: 0.9756047
    print('Declustered data mean',rvrcr[0])
    Declustered data mean 0.8575031798602849
    print('Data Mean: ',dhdata.table["CMP"]['CU'][dhdata.table["CMP"]['Domain']==1].mean())
    Data Mean:  0.844295253367543
    #...
    
    Checking the swath plots for Ordinary Kriging (OK), Nearest Neighbour (NN) and Inverse of the Distance Squared (ID2).
    #...
    bmodel.bmtable.groupby('XC')[['CU_OK','CU_ID2','CU_NN']].mean().plot()
    plt.show()
    bmodel.bmtable.groupby('YC')[['CU_OK','CU_ID2','CU_NN']].mean().plot()
    plt.show()
    bmodel.bmtable.groupby('ZC')[['CU_OK','CU_ID2','CU_NN']].mean().plot()
    plt.show()
    #...
    





    Part 11 - About cutoffs


    To finish this quick tutorial we are going to see how to extract the most important information from a mineral resource model: grades and tonnages.
    First we have to deal with an important piece of information missing from the dataset. The individual sample densities are missing, so we will assume a constant density of 2.95 g/cm3 for all blocks. From this value we will calculate the tonnage of each block, whose volume is 5 x 5 x 5 m = 125 m3, weighted by the D1 parameter that describes whether the block is entirely within the domain or cut by it.
    #...
    bmodel.bmtable['DENSIDADE'] = 2.95
    bmodel.bmtable['TONNES'] =bmodel.bmtable['DENSIDADE']*125*bmodel.bmtable['D1']
    #...
    
    We identified that the Cu grade is stored in column 8 and the tonnage in column 14 of our bmtable. The code below will generate a table with variable cut-offs and their respective tonnages and mean grades based on the information we modeled here.
    #...
    print("Cut-off","\t","Tonnage","\t","@ Grade")
    for a in np.arange(0.5,6,0.5):
       ton=bmodel.bmtable[(bmodel.bmtable['CU_OK']>= a)].sum()[14]
       gr=bmodel.bmtable[(bmodel.bmtable['CU_OK']>= a)].mean()[8]
       print(a,"\t",round(ton),"\t",round(gr,3))
    
    Cut-off  Tonnage 	 @ Grade
    0.5 	 140125645.0 	 1.194
    1.0 	 72022775.0 	 1.631
    1.5 	 33240416.0 	 2.12
    2.0 	 15283120.0 	 2.591
    2.5 	 6416480.0 	 3.107
    3.0 	 3090033.0 	 3.554
    3.5 	 1224250.0 	 4.041
    4.0 	 560454.0 	 4.413
    4.5 	 79696.0 	 4.821
    5.0 	 34847.0 	 5.036
    5.5 	 0.0 		 nan
    
    #...
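
    If you prefer to visualize the grade-tonnage relationship instead of reading the table, the optional matplotlib sketch below does the same calculation using the named columns 'TONNES' and 'CU_OK' (created earlier in this tutorial) rather than positional indexes:
    import numpy as np
    import matplotlib.pylab as plt
    
    # grade-tonnage curve from the block model, using named columns
    cutoffs = np.arange(0.5, 6, 0.5)
    tonnes = [bmodel.bmtable[bmodel.bmtable['CU_OK'] >= a]['TONNES'].sum() for a in cutoffs]
    grades = [bmodel.bmtable[bmodel.bmtable['CU_OK'] >= a]['CU_OK'].mean() for a in cutoffs]
    
    fig, ax1 = plt.subplots()
    ax1.plot(cutoffs, tonnes, 'b-o')
    ax1.set_xlabel('Cut-off (% Cu)')
    ax1.set_ylabel('Tonnes above cut-off', color='b')
    ax2 = ax1.twinx()
    ax2.plot(cutoffs, grades, 'r-o')
    ax2.set_ylabel('Mean grade above cut-off (% Cu)', color='r')
    plt.show()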
    
    We will now extract the blocks from the model based on cut-off values (1, 2, 3, 4 and 5) and visualize them in Paraview.
    #...
    # extract and save the blocks above each cut-off; each set_blocks call filters the
    # block model in place, and since the cut-offs increase, each file contains exactly
    # the blocks above its cut-off
    for c in (1, 2, 3, 4, 5):
        bmodel.set_blocks(bmodel.bmtable[bmodel.bmtable['CU_OK'] > c])
        bmodel.blocks2vtkUnstructuredGrid(path='cutoff%d_0.vtu' % c)
    
    Cut-off 1% South view.

    Cut-off 1% West view.

    Cut-off 2% South view.

    Cut-off 2% West view.

    Cut-off 3% South view.

    Cut-off 3% West view.

    Cut-off 4% South view.

    Cut-off 4% West view.

    Cut-off 5% South view.

    Cut-off 5% West view.

    Now it is your turn to do the same using Au instead of Cu.
    Have fun!

    André Luiz Lima Costa