Saturday, December 10, 2022
the this keyword of JS
class Polygon { constructor() { this.name = 'Polygon'; } }
const poly1 = new Polygon();
console.log(poly1.name); //Polygon

function outer_func() {
  function inner_func() {
    //called without a receiver (non-strict mode), so this is the global object
    this.prop01 = "value of prop01";
  }
  inner_func();
}
outer_func();
console.log( "prop01=" + prop01 ); //value of prop01

//in a browser script, top-level var declarations become properties of the global object
var prop02 = "value of prop02";
console.log( "prop02=" + this.prop02 ); //value of prop02

//when a function is called as a method, this is the object before the dot
outer_func.inner_func2 = function () {
  this.prop03 = "value of prop03";
};
outer_func.inner_func2();
console.log( "prop03=" + outer_func.prop03 ); //value of prop03
Thursday, December 1, 2022
Hyperscan basics
---requirements---
At the time of writing, we are using Hyperscan 5.4.0, which requires:
- CMake 2.8.11
- Ragel 6.9
- Python 2.7
- Boost 1.57
If the Hyperscan library is used on x86 systems without SSSE3, the runtime API functions will resolve to functions that return HS_ARCH_ERROR instead of potentially executing illegal instructions.
To build an AVX512VBMI runtime, the CMake variable BUILD_AVX512VBMI must be enabled manually during configuration (-DBUILD_AVX512VBMI=on).
---concepts---
--scan interface--
Patterns are provided to a compilation interface, which generates an immutable pattern database. The scan interface can then be used to scan a target data buffer against that database.
--Vectored mode--
The target data consists of a list of non-contiguous blocks that are available all at once. As with block mode, no retention of state is required (streaming mode, by contrast, does require state to be retained between calls).
--Stream state--
Some state space is required to store data that persists between scan calls for each stream. This allows Hyperscan to track matches that span multiple blocks of data.
--All matches--
Scanning /foo.*bar/ against fooxyzbarbar will return two matches from Hyperscan, at the points corresponding to the ends of fooxyzbar and fooxyzbarbar. In contrast, libpcre semantics by default would report only one match at fooxyzbarbar (greedy semantics) or, if non-greedy semantics were switched on, one match at fooxyzbar. Since Hyperscan reports all matches, switching between greedy and non-greedy semantics makes no difference to its output.
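To make these concepts concrete, here is a minimal block-mode sketch of my own (not part of the official docs): it compiles /foo.*bar/, scans fooxyzbarbar in a single call, and the callback should fire once per match end, per the all-matches semantics described above.
/* sketch only; build with something like: gcc demo.c -lhs */
#include <stdio.h>
#include <string.h>
#include <hs.h>

static int on_match(unsigned int id, unsigned long long from,
                    unsigned long long to, unsigned int flags, void *ctx) {
    printf("pattern %u matched, ending at offset %llu\n", id, to);
    return 0; /* returning non-zero would stop scanning */
}

int main(void) {
    const char *pattern = "foo.*bar";
    const char *data = "fooxyzbarbar";
    hs_database_t *db = NULL;
    hs_compile_error_t *err = NULL;
    if (hs_compile(pattern, 0, HS_MODE_BLOCK, NULL, &db, &err) != HS_SUCCESS) {
        fprintf(stderr, "compile failed: %s\n", err->message);
        hs_free_compile_error(err);
        return 1;
    }
    hs_scratch_t *scratch = NULL;
    if (hs_alloc_scratch(db, &scratch) != HS_SUCCESS) {
        hs_free_database(db);
        return 1;
    }
    /* block mode: the whole buffer is scanned in one call, no stream state kept */
    hs_scan(db, data, (unsigned int)strlen(data), 0, scratch, on_match, NULL);
    hs_free_scratch(scratch);
    hs_free_database(db);
    return 0;
}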
---hs_common.h---
- HS_INVALID: A parameter passed to this function was invalid.
- hs_error_t hs_stream_size(const hs_database_t *database, size_t *stream_size): Provides the size of the stream state allocated by a single stream opened against the given database; note that the database determines the size of the stream state (refer to --Stream state-- above for the concept).
- hs_error_t hs_serialized_database_size(const char *bytes, const size_t length, size_t *deserialized_size): This API can be used to allocate a (shared) memory region prior to deserializing with the hs_deserialize_database_at() function (see the sketch after this list).
- hs_error_t hs_valid_platform(void): This function can be called on any x86 platform to determine if the system provides the required instruction set.
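As an illustration of the serialization calls above, here is a hedged sketch of my own: a compiled database is serialized, a caller-managed (possibly shared) region is sized with hs_serialized_database_size(), and the database is rebuilt in place with hs_deserialize_database_at().
/* sketch: assumes `db` came from hs_compile(); most error handling omitted */
#include <stdlib.h>
#include <hs.h>

hs_database_t *clone_database(const hs_database_t *db) {
    char *bytes = NULL;
    size_t length = 0;
    if (hs_serialize_database(db, &bytes, &length) != HS_SUCCESS)
        return NULL;

    size_t db_size = 0;
    hs_serialized_database_size(bytes, length, &db_size);

    /* the caller owns this region; it could just as well be shared memory */
    hs_database_t *copy = (hs_database_t *)malloc(db_size);
    if (copy && hs_deserialize_database_at(bytes, length, copy) != HS_SUCCESS) {
        free(copy);
        copy = NULL;
    }
    free(bytes); /* serialized buffer; with the default allocator this was malloc'd */
    return copy;
}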
---hs_compile.h---
- HS_FLAG_MULTILINE:This flag instructs the expression to make the ^ and $ tokens match newline characters as well as the start and end of the stream.
- HS_FLAG_SINGLEMATCH: If a group of expressions sharing a match ID specify the flag, then at most one match with that match ID will be generated per stream, so it is usually better not to share an ID between expressions.
- HS_FLAG_COMBINATION:This flag instructs Hyperscan to parse this expression as logical combination syntax.
To illustrate, here is an example combination expression: ((301 OR 302) AND 303) AND (304 OR NOT 305). If the expression with ID 301 matches at offset 10, the logical value of 301 is true while the other patterns' values are false. Hence, the whole combination's value is false. Then expression 303 matches at offset 20. Now the values of 301 and 303 are true while the other patterns' values are still false. In this case, the combination's value is true, so the combination expression raises a match at offset 20. Finally, expression 305 matches at offset 30. Now the values of 301, 303 and 305 are true while the other patterns' values are still false. In this case, the combination's value is false and no match is raised. (A compile-time sketch using hs_compile_multi follows this list.)
- HS_TUNE_FAMILY_GENERIC:This indicates that the compiled database should not be tuned for any particular target platform.
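Here is a hedged sketch of how such a combination might be compiled with hs_compile_multi; the sub-patterns and the ID 400 are invented for illustration, and the combination uses the &, | and ! operators of the actual syntax:
/* sketch: five ordinary patterns plus one logical combination over their IDs */
#include <hs.h>

hs_database_t *compile_with_combination(hs_compile_error_t **err) {
    const char *const expressions[] = {
        "abc", "def", "ghi", "jkl", "mno",      /* hypothetical sub-patterns */
        "((301|302)&303)&(304|!305)"            /* combination over their IDs */
    };
    unsigned int ids[]   = { 301, 302, 303, 304, 305, 400 };
    unsigned int flags[] = { 0, 0, 0, 0, 0, HS_FLAG_COMBINATION };
    hs_database_t *db = NULL;
    if (hs_compile_multi(expressions, flags, ids, 6, HS_MODE_BLOCK,
                         NULL, &db, err) != HS_SUCCESS)
        return NULL;
    return db;
}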
---hs_runtime.h---
---limits---
The version of Hyperscan used to produce a compiled pattern database must match the version of Hyperscan used to scan with it.
Using the SOM flag entails a number of trade-offs and limitations:
- Reduced pattern support
- Increased stream state: more memory required
- Performance overhead
- Incompatible features: Some other Hyperscan pattern flags can NOT be used in combination with SOM.
- The start offset returned for a match may refer to a point in the stream before the current block being scanned. Hyperscan provides no facility for accessing earlier blocks; if the calling application needs to inspect historical data, then it must store it itself (see the streaming sketch after this list).
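Here is a rough streaming sketch of my own (not from the post) showing these trade-offs in action: the pattern is compiled with HS_FLAG_SOM_LEFTMOST plus a SOM horizon mode, the per-stream state size is queried with hs_stream_size(), and a match spanning two writes is reported with a from offset that lies in the earlier block.
/* sketch: streaming scan with start-of-match reporting; error handling trimmed */
#include <stdio.h>
#include <hs.h>

static int on_match(unsigned int id, unsigned long long from,
                    unsigned long long to, unsigned int flags, void *ctx) {
    /* with HS_FLAG_SOM_LEFTMOST, `from` may lie in a block scanned earlier;
       Hyperscan does not keep that data, so the application must if it needs it */
    printf("match [%llu, %llu)\n", from, to);
    return 0;
}

int main(void) {
    hs_database_t *db = NULL;
    hs_compile_error_t *err = NULL;
    if (hs_compile("foo.*bar", HS_FLAG_SOM_LEFTMOST,
                   HS_MODE_STREAM | HS_MODE_SOM_HORIZON_LARGE,
                   NULL, &db, &err) != HS_SUCCESS) {
        fprintf(stderr, "compile failed: %s\n", err->message);
        hs_free_compile_error(err);
        return 1;
    }
    size_t state = 0;
    hs_stream_size(db, &state);      /* per-stream state size, decided by the database */
    printf("stream state size: %zu bytes\n", state);

    hs_scratch_t *scratch = NULL;
    hs_alloc_scratch(db, &scratch);

    hs_stream_t *stream = NULL;
    hs_open_stream(db, 0, &stream);
    hs_scan_stream(stream, "foox", 4, 0, scratch, on_match, NULL);  /* block 1 */
    hs_scan_stream(stream, "yzbar", 5, 0, scratch, on_match, NULL); /* block 2: match spans both */
    hs_close_stream(stream, scratch, on_match, NULL);

    hs_free_scratch(scratch);
    hs_free_database(db);
    return 0;
}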
Saturday, January 23, 2021
how to renew pr card
By Zhixiong Xu nkxzhx@gmail.com At 2021-Jan-17
- fully complete and sign IMM5444E; where the form asks for the current date, if you are not filling it in on the day you sign, enter the same date on which you will sign the document (for example 18/02/21).
- prepare a photocopy of your passport.
- prepare 2 photos, 5 cm wide by 7 cm high, in a small envelope, with the following written on the back of each photo:
- Name and date of birth
- Date, name and address of photography studio
- photocopy of birth certificate, if under 18 years old.
- report cards, transcripts, and attendance records, if under 18 years old.
- pay fee and print receipt.
- make sure everything is ready, then mail it all together with the document checklist (IMM5644E) to:
CPC - PR Card PO Box 10020 Sydney, NS B1P 7C1
Saturday, March 7, 2020
hello, Random Forest
The task here is to predict whether a bank currency note is authentic or not based on attributes such as variance (of wavelet transformed image).
The code is tuned from https://stackabuse.com/random-forest-algorithm-with-python-and-scikit-learn/
Get dataset.csv from https://drive.google.com/file/d/13nw-uRXPY8XIZQxKRNZ3yYlho-CYm_Qt/view
, columns [0, 4) are the features (X: x0~x3), and column 4 is the class value (label).
This demo must run under conda; set up your conda env and activate it (for me, ` conda activate zxxu_conda `):
(zxxu_conda) root@BadSectorsUbun...
"""
from os import name as os_name
from os.path import dirname
from os.path import join as join_path
OSN_HINT_UNIX = 'posix'
OSN_HINT_WINDOWS= 'nt'
OSN = os_name
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
#on import error, run ` pip install -U scikit-learn ` under conda
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import classification_report, confusion_matrix, accuracy_score
PrjDir = dirname(__file__)
dataset_path = join_path(PrjDir, "is_bank_currency_fake.csv")
dataset = pd.read_csv(dataset_path)
#select all rows, and columns [0, 4) as the features x0~x3
X = dataset.iloc[:, 0:4].values
y = dataset.iloc[:, 4].values
num_recs = X.shape[0]
print( "number of records:%d" % num_recs )
#random_state=0 makes the split reproducible; drop it if you want a different split each run
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
num_1s_in_y_test = np.count_nonzero(y_test)
num_0s_in_y_test = y_test.size - num_1s_in_y_test
print( "number of 1s VS 0s in y_test:%d VS %d" % (num_1s_in_y_test, num_0s_in_y_test))
"""
if NUM_HIJACK_OFF_1s_of_y_test is used, the number of 1s in y_test will decrease while the number of 0s increases
NUM_HIJACK_OFF_1s_of_y_test = 2
assert num_1s_in_y_test > NUM_HIJACK_OFF_1s_of_y_test
num_hijack_off_1s_of_y_test = 0
for i in range(0,num_1s_in_y_test):
if num_hijack_off_1s_of_y_test >= NUM_HIJACK_OFF_1s_of_y_test:
break
if y_test[i]:
y_test[i] = 0
num_hijack_off_1s_of_y_test += 1
"""
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
#The number of trees in the forest
#Changed in sklearn 0.22: The default value of n_estimators changed from 10 to 100
regressor = RandomForestRegressor(n_estimators=20, random_state=0)
regressor.fit(X_train, y_train)
#now we predict on the 20 percent test samples, but we don't yet know how good the predictions are
y_pred = regressor.predict(X_test)
"""
>>> import numpy as np
>>> np.round([0.49])
array([0.])
>>> np.round([0.51])
array([1.])
"""
"""
https://www.dataschool.io/simple-guide-to-confusion-matrix-terminology/
true positives (TP): These are cases in which we predicted yes (they have the disease), and they do have the disease.
true negatives (TN): We predicted no, and they don't have the disease.
True Positive Rate = When it's actually yes, how often does it predict yes = also known as "Sensitivity" or "Recall"
True Negative Rate = When it's actually no, how often does it predict no = also known as "Specificity"
FP_rate = 1 - TN_rate
Accuracy: Overall, how often is the classifier correct? (TP+TN)/total
Misclassification Rate = also known as "Error Rate": 1 - Accuracy
Precision: When it predicts yes, how often is it correct? TP/predicted_yes
#Prevalence: How often does the yes condition actually occur in our sample? actual yes/total
"""
y_pred_rounded = y_pred.round()
tn, fp, fn, tp = confusion_matrix(y_test, y_pred_rounded).ravel()
assert (fp + tn) == num_0s_in_y_test
assert (fn + tp) == num_1s_in_y_test
print("true positives rate=%.2f, true negatives rate=%.2f"
% (float(tp)/num_1s_in_y_test,float(tn)/num_0s_in_y_test))
"""
def of support:
The support is the number of occurrences of each class in y_true.
y_true is the ground truth (correct) target values.
if you don't understand the support column, enable NUM_HIJACK_OFF_1s_of_y_test above and watch how the support counts change
F Score: the harmonic mean of precision and recall (often described as a weighted average of the two)
"""
print(classification_report(y_test,y_pred_rounded))
Thursday, February 13, 2020
R for Beginners
in RStudio editor, use ctrl+shift+C to add multiline comments to selected lines.
?lm will show help of function lm()
help.search("tree") will display a list of the functions which help pages mention “tree”. Note that if some packages have been recently installed, it may be useful to refresh the database used by help.search using the option rebuild (e.g., help.search("tree", rebuild = TRUE)).
When R is running, variables, data, functions are stored in the active memory of the computer in the form of objects which have a name. The name of an object must start with a letter and can include dots (.).
#The functions available to the user are stored in a library localised on the disk in a directory called R_HOME/library
R.home() will show R_HOME; tested on Ubuntu 19.10, it is "/usr/lib/R". This directory contains the packages of functions.
The package named base is in a way the core of R. Each package has a directory called R containing a file named after the package; for instance, for the package base, this is the file R_HOME/library/base/R/base. This file contains the functions of the package.
#++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
# the following example creates a data frame and shows its details:
# myFrame=data.frame(
# emp_id = c (1:5),
# emp_name = c("Rick","Dan","Michelle","Ryan","Gary")
# )
# ls.str(pat="myFrame")
#
# myFrame : 'data.frame': 5 obs. of 2 variables:
# $ emp_id : int 1 2 3 4 5
# $ emp_name: Factor w/ 5 levels "Dan","Gary","Michelle",..: 4 1 3 5 2
# if there are too many lines, use ls.str(pat="myFrame", max.level = -1) to hide details.
#To delete objects in memory, we use the function rm: rm(x) deletes the
#object x, rm(x,y) deletes both the objects x and y, rm(list=ls()) deletes all
#the objects in memory
#++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
# > A <- "Gomphotherium"; compar <- TRUE; z <- -Inf
# > mode(A); mode(compar); mode(z); length(A)
# [1] "character"
# [1] "logical"
# [1] "numeric"
# [1] 1
#++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
read table from https://s3.amazonaws.com/assets.datacamp.com/blog_assets/test.txt :
1 6 a
2 7 b
3 8 c
4 9 d
5 10 e
url <- "https://s3.amazonaws.com/assets.datacamp.com/blog_assets/test.txt"
read.table(
url,
header = FALSE,
quote = "\"'",
colClasses = c("numeric","numeric","character"),
nrows = 2, #only read some rows
skip = 0, #start from the first row
check.names = TRUE, #checks that the variable|column names are valid
blank.lines.skip = TRUE,
comment.char = "" # no comment in this file
)
V1 V2 V3
1 1 6 a
2 2 7 b
#++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
use scan to read table:
scan(url, n = 2, blank.lines.skip = TRUE, comment.char="#")
Read 2 items
[1] 1 6
#sep = "" , not " "
scan(url, sep = "", what = list(0,0,""), nmax=2)
Read 2 records
[[1]]
[1] 1 2
[[2]]
[1] 6 7
[[3]]
[1] "a" "b"
# the values are read into 3 vectors (variables)
#++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
sequence(c(3, 2, 4))
[1] 1 2 3 1 2 1 2 3 4
sequence(1:3)
[1] 1 1 2 1 2 3
seq(1, 5, 0.5)
[1] 1.0 1.5 2.0 2.5 3.0 3.5 4.0 4.5 5.0
The function gl (generate levels) is very useful because it generates regular
series of factors.
> gl(3, 5, length=30)
[1] 1 1 1 1 1 2 2 2 2 2 3 3 3 3 3 1 1 1 1 1 2 2 2 2 2 3 3 3 3 3
Levels: 1 2 3
> gl(2, 6, label=c("Male", "Female"))
[1] Male Male Male Male Male Male
[7] Female Female Female Female Female Female
Levels: Male Female
> expand.grid(h=c(60,80), w=c(100, 300), sex=c("Male", "Female"))
h w sex
1 60 100 Male
2 80 100 Male
3 60 300 Male
4 80 300 Male
5 60 100 Female
6 80 100 Female
7 60 300 Female
8 80 300 Female
#++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> matrix(1:6, 2, 3)
[,1] [,2] [,3]
[1,] 1 3 5
[2,] 2 4 6
> matrix(1:6, 2, 3, byrow=TRUE)
[,1] [,2] [,3]
[1,] 1 2 3
[2,] 4 5 6
#++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> fac <- factor(c(1, 10))
> fac
[1] 1 10
Levels: 1 10
> as.numeric(fac)
[1] 1 2
#++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> x <- matrix(1:6, 2, 3)
> x
[,1] [,2] [,3]
[1,] 1 3 5
[2,] 2 4 6
> x[, 3]
[1] 5 6
> x[, 3, drop = FALSE]
[,1]
[1,] 5
[2,] 6
#++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> x <- 1:10
> x[x >= 5] <- 20
> x
[1] 1 2 3 4 20 20 20 20 20 20
> x[x == 1] <- 25
> x
[1] 25 2 3 4 20 20 20 20 20 20
Friday, January 31, 2020
gcc important preprocess , compile and assemble options
-E Stop after the preprocessing stage; the output is preprocessed source code, which is sent to standard output.
-S Stop after the stage of compilation proper; do not assemble. The output is an assembler code file.
-c Compile or assemble the source files, but do not link; the output is an object file.
-o Place the output in the named file. If this option is not specified, the default is to put an executable file in ‘a.out’, the object file in ‘source.o’, an assembler file in ‘source.s’, a precompiled header file in ‘source.suffix.gch’, and all preprocessed C source on standard output.
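As a quick illustration of those stages (my own example, not from the gcc manual), take a trivial translation unit and run each option against it; the expected commands and outputs are noted in the trailing comment:
/* hello.c -- a minimal unit to exercise the gcc stages listed above */
#include <stdio.h>

#define GREETING "hello, gcc stages"   /* expanded by the preprocessor (-E) */

int main(void) {
    printf("%s\n", GREETING);
    return 0;
}

/*
   gcc -E hello.c            # preprocessed source on stdout, GREETING already expanded
   gcc -S hello.c            # stop after compilation: writes assembler to hello.s
   gcc -c hello.c            # assemble but do not link: writes object file hello.o
   gcc hello.c -o hello      # full pipeline: writes the executable 'hello'
*/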
Saturday, December 28, 2019
The Remains of the Day, Chapter 2
I am not used to sleeping in a strange bed, so I slept little and badly. I woke about an hour ago; since it was still dark outside and I had a whole day of driving ahead of me, I tried to go back to sleep, but it was truly hard to drop off, so I turned on the light and shaved.
When I drew back the curtain just now (to look outside), it was still grey out, with a thin mist hanging over everything that made it hard to make out the bakery opposite. I could see the dim outline of the little bridge ahead; the street was completely empty. In the distance there was the sound of hammering, and nearby the occasional cough from a back room. Beyond that there was no sound at all, and no sign that the landlady would have breakfast ready on time at half past seven.
Miss Kenton's letter has come back into my mind. Strictly speaking we could also call her Mrs Benn; I knew her only in her younger days, before she married Mr Benn, the owner of a timber estate near a small village in the west called Little Compton, and that is twenty years ago now. Although her letter gives no explicit details, it does state plainly that a divorce is imminent, so I feel I may still call her Miss Kenton.
Miss Kenton says she liked the second-floor bedroom best; from there she could see the lawn in the distance, and she would often stand at the window gazing out at it. She also writes in her letter: "In those days, from the second floor we could always see your father pacing back and forth in front of the house with his head bowed, as if he were searching for a lost jewel."
Evidently we both still remember this episode from more than thirty years ago. It did indeed happen on one of the summer evenings she mentions. I remember clearly that as I climbed the stairs to the second floor I saw a shaft of orange sunlight. It fell across the dim corridor, so that I could see every half-open bedroom door. As I passed the bedrooms I saw Miss Kenton from behind; she turned and said to me softly: "Mr Stevens, you are not busy, then."
Below, the shadows of the poplars fell across the whole lawn. The lawn sloped upward, and at its far end stood a summerhouse where my father was pacing, utterly absorbed, looking, just as Miss Kenton said, very much as if he were searching for a lost jewel.
My father and Miss Kenton both came here in the spring of 1922, because the housekeeper and the manservant then under me had just decided to marry and leave service. Attachments between maids and menservants are the greatest threat to a household, and the reason staff keep leaving. Of course, such (romantic) affairs between maids and menservants are only to be expected, and a good butler should always allow for them; but when it happens too often among the senior employees, the work certainly suffers. Moreover, to lose oneself in romance and neglect one's duties is deplorable, and such behaviour is the greatest obstacle to good professional standards.
But in saying this I am not really thinking of Miss Kenton; she did not stay on at the house all along, but left because of her marriage. Yet I am certain that while she worked under me she was wholly devoted to her work and always put it first.
I may be wandering off the point. As I mentioned, we were short of a housekeeper and a manservant, and Miss Kenton filled the former vacancy admirably. At about the same time, owing to the death of Mr John Silvers, my father had to give up his distinguished service at Loughborough House. His skills were outstanding, but he was past seventy and suffering from severe arthritis and other minor ailments. It is hard to say, but (I feel) he could scarcely have competed with capable younger men; all he could do was present Darlington Hall with his great experience and distinguished record.